This is about the processors, not the laptops, so commenting on the chips instead. They look great, but they look like they're the M1 design, just more of it. Which is plenty for a laptop! But it'll be interesting to see what they'll do for their desktops.
Most of the additional chip area went into more GPUs and special-purpose video codec hardware. It's "just" two more cores than the vanilla M1, and some of the efficiency cores on the M1 became performance cores. So CPU-bound things like compiling code will be "only" 20-50% faster than on the M1 MacBook. The big wins are for GPU-heavy and codec-heavy workloads.
That makes sense since that's where most users will need their performance. I'm still a bit sad that the era of "general purpose computing" where CPU can do all workloads is coming to an end.
Nevertheless, impressive chips, I'm very curious where they'll take it for the Mac Pro, and (hopefully) the iMac Pro.
Total cores, but it goes from 4 "high performance" and 4 "efficiency" to 8 "high performance" and 2 "efficiency". So the increase in performance should be more dramatic than "20% more cores" would suggest.
Yes. But the 14" and 16" have larger batteries than the 13" MacBook Pro or Air. And they were designed for performance, so two fewer efficiency cores doesn't matter as much.
It is also important to note that, despite the M1 name, we don't know if the CPU cores are the same as the ones used in the M1 / A14, or whether they used the A15 design, where the energy-efficient cores had a significant improvement. The video decoder used in the M1 Pro and Max seems to be from the A15, and the LPDDR5 support also implies a new memory controller.
In the A15, AnandTech claims the efficiency cores are 1/3 the performance but 1/10 the power. They should be looking at (effectively) doubling the power consumption over the M1 with just the CPUs, assuming they don't increase clock speeds.
Going from 8 to 16 or 32 GPU cores is another massive power increase.
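A rough sanity check of that "doubling" estimate, assuming AnandTech's ~1/10 power figure for an efficiency core and unchanged clocks (all numbers here are assumed relative values, not measurements):

    /* Rough check of the "doubling the power consumption" estimate above,
     * assuming an efficiency core burns ~1/10 the power of a performance
     * core (the A15 figure quoted) and clocks stay the same. */
    #include <stdio.h>

    int main(void) {
        double p_core = 1.0, e_core = 0.1;        /* relative power, assumed */
        double m1     = 4 * p_core + 4 * e_core;  /* M1: 4P + 4E */
        double m1_pro = 8 * p_core + 2 * e_core;  /* M1 Pro/Max: 8P + 2E */
        printf("CPU power vs M1: %.2fx\n", m1_pro / m1);  /* ~1.86x */
        return 0;
    }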
I wonder if Apple will give us a 'long-haul' mode where the system is locked to only the energy efficient cores and settings. I think us developer types would love a computer that survives 24 hours on battery.
macOS Monterey coming out on the 25th has a new Low Power Mode feature that may do just that. That said, these Macs are incredibly efficient for light use; you may already get 24 hrs of battery life with your workload, not counting screen-off time.
Video playback is accelerated by essentially custom ASIC processing built into the CPU, so it's one of the most efficient things you can do now. Most development workloads are far more compute intensive.
I get about 14-16 hours out of my M1 MacBook Air doing basically full-time development (browser, mail client, Slack, text editor & terminal open, and compiling code periodically).
I know everyone's use case is different, but most of my development workload is 65% typing code into a text editor and 35% running it. I'm not continually pegging the CPU, just intermittently, in which case the existence of low power cores help a lot. The supposed javascript acceleration in the M1 has seemed to really speed up my workloads too.
This is true, but it's not worst case by far. Most video is 24 or 30 fps, so about half the typical 60 hz refresh rate. Still a nice optimization path for video. I'm not sure what effect typing in an editor will have on screen refresh, but if the Electron issue is any indication, it's probably complicated.
The power supply is for charging the battery faster. The new MagSafe 3 system can charge with more wattage than USB-C, as per the announcement. USB-C max wattage is 100 watts, which was the previous limiting factor for battery charging.
That's with 2 connectors right? I have a Dell Precision 3760 and the one connector charging mode is limited to around 90W. With two connectors working in tandem (they snap together), it's 180W.
The connectors never get remotely warm .. in fact under max charge rate they're consistently cool to touch, so I've always thought that it could probably be increased a little bit with no negative consequences.
Single connector, the 3.1 spec goes up to 5A at 48V. You need new cables with support for the higher voltages, but your "multiple plugs for more power" laptop is exactly the sort of device it's designed for.
I’ve not seen any manufacturer even announce they were going to make a supported cable yet, let alone seen one that does. I might’ve missed it though. This will only make the hell of USB-C cabling worse imho.
The USB Implementers Forum announced a new set of cable markings for USB 4 certified cables that will combine information on the maximum supported data rate and maximum power delivery for a given cable.
The 16" has a 100Wh battery, so it needs 100W of power to charge 50% in 30 minutes (their "fast charging"). Add in 20W to keep the laptop running at the same time, and some conversion losses, and a 140W charger sounds just about right.
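Here is that back-of-the-envelope math as a quick sketch; the 20 W system draw and the 90% conversion efficiency are assumptions, not Apple's numbers:

    /* Back-of-the-envelope charging math (assumed numbers, not Apple specs). */
    #include <stdio.h>

    int main(void) {
        double battery_wh   = 100.0;  /* 16" battery capacity */
        double charge_frac  = 0.5;    /* "50% in 30 minutes" claim */
        double charge_hours = 0.5;
        double system_draw  = 20.0;   /* watts to keep the machine running, assumed */
        double efficiency   = 0.90;   /* assumed conversion efficiency */

        double charge_power = battery_wh * charge_frac / charge_hours; /* 100 W into the cells */
        double wall_power   = (charge_power + system_draw) / efficiency;

        printf("Power into battery:  %.0f W\n", charge_power);
        printf("Estimated wall draw: %.0f W\n", wall_power);  /* ~133 W, close to the 140 W brick */
        return 0;
    }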
Sure, but it's an Apple cable plugging into an Apple socket. They don't have to be constrained by the USB-C specs and could implement a custom high power charging mode. In fact I believe some other laptop manufacturers already do this.
I’m not particularly surprised. They have little to prove with the iPhone, but have every reason to make every measurable factor of these new Macs better than both the previous iteration and the competition. Throwing in a future-model-upsell is negligible compared to mixed reviews about Magsafe 3 cluttering up reviews they otherwise expect to be positive.
Just in case people missed it - the magsafe cable connects to the power supply via usb-c. So (in theory) there's nothing special about the charger that you couldn't do with a 3rd party charger, or a multiport charger or something like that.
MagSafe was a gimmick for me - disconnects far too easily, cables fray in like 9 months, only one side, proprietary and overpriced. Use longer cables and they will never be yanked again. The MBP is heavy enough that even USB-C gets pulled out on a good yank.
I briefly had an M1 Macbook Air and the thing I hated the most about it was the lack of Magsafe. I returned it (needed more RAM) and was overjoyed they brought Magsafe back with these and am looking forward to having it on my new 16"
You can also still charge through USB C if you don't care for Magsafe.
Might be a power limitation. I have an XPS 17 which only runs at full performance and charges the battery with the supplied 130W charger. USB C is only specced to 100W. I can still do most things on the spare USB C charger I have.
I have a top-spec 15” MBP that was the last release just before 16”. It has 100W supply and it’s easy to have total draw more than that (so pulling from the battery while plugged in) while running heavy things like 3D games. I’ve seen around 140W peak. So a 150W supply seems prudent.
In the power/performance curves provided by Apple, they imply that the Pro/Max provides the same level of performance at a slightly lower power consumption than the original M1.
But at the same time, Apple isn't providing any hard data or explaining their methodology. I dunno how much we should be reading into the graphs. /shrug
Yes, but only at the very extreme. It's normal that a high core count part at low clocks has higher efficiency (perf/power) at a given performance level than a low core count part at high clocks, since power grows super-linearly with clock speed (decreasing efficiency). But notably they've tuned the clock/power regime of the M1 Pro/Max CPUs so that the crossover region here is very small.
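To illustrate the super-linear growth: dynamic power scales roughly as C*V^2*f, and voltage generally has to rise with frequency near the top of the curve. A toy model with entirely made-up numbers, just to show the shape:

    /* Toy model of dynamic CPU power, P ~ C * V^2 * f, with voltage rising
     * roughly linearly with frequency near the top of the curve.
     * All numbers are made up for illustration. */
    #include <stdio.h>

    int main(void) {
        double base_f = 2.0, base_v = 0.80, base_p = 5.0;  /* assumed per-core baseline */
        for (int i = 0; i <= 7; i++) {
            double f = base_f + 0.2 * i;
            double v = base_v * (1.0 + 0.5 * (f - base_f) / base_f);  /* assumed V/f slope */
            double p = base_p * (f / base_f) * (v / base_v) * (v / base_v);
            printf("%.1f GHz -> %4.1f W, perf/W relative to baseline: %.2f\n",
                   f, p, (f / base_f) / (p / base_p));
        }
        return 0;
    }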
I think this is pretty easy to math: M1 has 2x the efficiency cores of these new models. Those cores do a lot of work in measured workloads that will sometimes be scheduled on performance cores instead. The relative performance and efficiency lines up pretty well if you assume that a given benchmark is utilizing all cores.
> M1 Pro delivers up to 1.7x more CPU performance at the same power level and achieves the PC chip’s peak performance using up to 70 percent less power
> I'm still a bit sad that the era of "general purpose computing" where CPU can do all workloads is coming to an end.
You'd have to be extremely old to remember that era. Lots of stuff important to making computers work got split off into separate chips away from the CPU pretty early into mass computing, such as sound, graphics, and networking. We've also been sending a lot of compute from the CPU to the GPU of late, for both graphics and ML purposes.
Lately it seems like the trend has been taking these specialized peripheral chips and moving them back into SoC packages. Apple's approach here seems to be an evolutionary step on top of say, an Intel chip with integrated graphics, rather than a revolutionary step away from the era of general purpose computing.
The IBM PC that debuted with the 286 was the PC/AT ("Advanced Technology", hah) that is best known for introducing the AT bus later called the ISA bus that led to the proliferation of video cards, sound cards, and other expansion cards that made the PC what it is today.
I'm actually not sure there ever was a "true CPU computer age" where all processing was CPU-bound/CPU-based. Even the deservedly beloved MOS 6502 processor that powered everything for a hot decade or so was considered merely a "micro-controller" rather than a "micro-processor", and nearly every use of the MOS 6502 involved a lot of machine-specific video chips and memory-management chips. The NES design lasted so long in part because toward the end cartridges would sometimes have entirely custom processing chips pulling work off the MOS 6502.
Even the mainframe-era term itself, "Central Processing Unit", has always sort of implied that it works in tandem with other "processing units"; it's just the most central. (In some mainframe designs I think this was even quite literal in the floorplan.) Of course, when your CPU is a massive tower full of boards that make up individual operations, the very opposite of an integrated circuit, it's quite tough to call it a "general purpose CPU" as we imagine them today.
The C64 mini runs on an ARM processor, so that doesn't count in this context. Also I just learned that the processor in the C64 had two coprocessors for sound and graphics (?). So maybe that also doesn't count.
400GB/s available to the CPU cores in a unified memory, that is going to really help certain workloads that are very memory dominant on modern architectures. Both Intel and AMD are solving this with ever increasing L3 cache sizes but just using attached memory in a SOC has vastly higher memory bandwidth potential and probably better latency too especially on work that doesn't fit in ~32MB of L3 cache.
The M1 still uses DDR memory at the end of the day, it's just physically closer to the core. This is in contrast to L3 which is actual SRAM on the core.
The DDR being closer to the core may or may not allow the memory to run at higher speeds due to better signal integrity, but you can purchase DDR4-5333 today whereas the M1 uses 4266.
The real advantage is that the M1 Max uses 8 channels, which is impressive considering that's as many as an AMD EPYC, but operating at roughly twice the speed at the same time.
Just to underscore this, memory physically closer to the cores has improved tRAS times measured in nanoseconds. This has the secondary effect of boosting the performance of the last-level cache since it can fill lines on a cache miss much faster.
The step up from DDR4 to DDR5 will help fill cache misses that are predictable, but everybody uses a prefetcher already, the net effect of DDR5 is mostly just better efficiency.
The change Apple is making, moving the memory closer to the cores, improves unpredicted cache misses. That's significant.
> Just to underscore this, memory physically closer to the cores has improved tRAS times measured in nanoseconds.
I doubt that tRAS timing is affected by how close / far a DRAM chip is from the core. It's just a RAS command after all: transfer data from DRAM to the sense amplifiers.
If tRAS has improved, I'd be curious how it was done. It's one of those values that has been basically constant (on a nanosecond basis) for 20 years.
Most DDR3 / DDR4 improvements have been about breaking up the chip into more-and-more groups, so that Group#1 can be issued a RAS command, then Group#2 can be issued a separate RAS command. This doesn't lower latency, it just allows the memory subsystem to parallelize the requests (increasing bandwidth but not improving the actual command latency specifically).
The physically shorter wiring is doing basically nothing. That's not where any of the latency bottlenecks are for RAM. If it was physically on-die, like HBM, that'd maybe be different. But we're still talking regular LPDDR5 using off-the-shelf DRAM modules. The shorter wiring would potentially improve signal quality, but ground shields do that, too. And Apple isn't exceeding any specs on this (i.e., it's not overclocked), so above-average signal integrity isn't translating into any performance gains anyway.
Apple also uses massive cache sizes, compared to the industry.
They put a 32 megabyte system level cache in their latest phone chip.
>at 32MB, the new A15 dwarfs the competition’s implementations, such as the 3MB SLC on the Snapdragon 888 or the estimated 6-8MB SLC on the Exynos 2100
> Apple also uses massive cache sizes, compared to the industry.
AMD's upcoming Ryzen are supposed to have 192MB L3 "v-cache" SRAM stacked above each chiplet. Current chiplets are 8-core. I'm not sure if this is a single chiplet but supposedly good for 2Tbps[1].
Slightly bigger chip than an iPhone chip, yes. :) But also, wow, a lot of cache. Having it stacked above rather than built into the core is another game-changing move, since a) your core has more space and b) you can 3D-stack many layers of cache on top.
This has already been used on their GPUs, where the 6800 & 6900 have 128MB of L3 "Infinity Cache" providing 1.66TBps. It's also largely how these cards get by with "only" 512GBps worth of GDDR6 feeding them (256-bit/quad-channel... at 16GT). AMD's R9 Fury from spring 2015 had 512GBps of HBM, for comparison, albeit via that slow 4096-bit wide interface.
Anyhow, I'm also in awe of the speed wins Apple got here from bringing RAM in close. Cache is a huge huge help. Plus 400GBps main memory is truly awesome, and it's neat that either the CPU or GPU can make use of it.
> The M1 still uses DDR memory at the end of the day, it's just physically closer to the core. This is in contrast to L3 which is actual SRAM on the core.
But they're probably using 8-channels of LPDDR5, if this 400GB/s number is to be believed. Which is far more memory channels / bandwidth than any normal chip released so far, EPYC and Skylake-server included.
It's more comparable to the sort of memory bus you'd typically see on a GPU... which is exactly what you'd hope for on a system with high-end integrated graphics. :)
You'd expect HBM or GDDR6 to be used. But this is seemingly LPDDR5 that's being used.
So its still quite unusual. Its like Apple decided to take commodity phone-RAM and just make many parallel channels of it... rather than using high-speed RAM to begin with.
HBM is specifically designed to be soldered near a CPU/GPU as well. For them to be soldering commodity LPDDR5 is kinda weird to me.
---------
We know it isn't HBM because HBM is 1024-bits at lower clock speeds. Apple is saying they have 512-bits across 8 channels (64-bits per channel), which is near LPDDR5 / DDR kind of numbers.
200GBps is within the realm of 1x HBM channel (1024-bit at low clock speeds), and 400GBps is 2x HBM channels (2048-bit bus at low clock speeds).
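For reference, rough peak-bandwidth math for the configurations being discussed; the bus widths and transfer rates below are assumptions based on public reporting, not confirmed specs:

    /* Rough peak-bandwidth math for the configurations discussed above.
     * Bus widths and transfer rates are assumptions from public reporting. */
    #include <stdio.h>

    static double gb_per_s(double bus_bits, double mega_transfers) {
        return (bus_bits / 8.0) * mega_transfers * 1e6 / 1e9;  /* bytes/transfer * transfers/s */
    }

    int main(void) {
        printf("M1,     128-bit LPDDR4X-4266:          %6.1f GB/s\n", gb_per_s(128, 4266));
        printf("M1 Pro, 256-bit LPDDR5-6400:           %6.1f GB/s\n", gb_per_s(256, 6400));
        printf("M1 Max, 512-bit LPDDR5-6400:           %6.1f GB/s\n", gb_per_s(512, 6400));
        printf("1x HBM2 stack, 1024-bit at 2000 MT/s:  %6.1f GB/s\n", gb_per_s(1024, 2000));
        return 0;
    }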
> The DDR being closer to the core may or may not allow the memory to run at higher speeds due to better signal integrity, but you can purchase DDR4-5333 today whereas the M1 uses 4266.
My understanding is that bringing the RAM closer increases the bandwidth (better latency and larger buses), not necessarily the speed of the RAM dies. Also, if I am not mistaken, the RAM in the new M1s is LPDDR5 (I read that, but it did not stay long on screen so I could be mistaken). Not sure how it compares with DDR4 DIMMs.
The overall bandwidth isn't affected much by the distance alone. Latency, yes, in the sense that the signal literally has to travel further, but that difference is miniscule (like 1/10th of a nanosecond) compared to overall DDR access latencies.
Better signal integrity could allow for larger busses, but I don't think this is actually a single 512 bit bus. I think it's multiple channels of smaller busses (32 or 64 bit). There's a big difference from an electrical design perspective (byte lane skew requirements are harder to meet when you have 64 of them). That said, I think multiple channels is better anyway.
The original M1 used LPDDR4 but I think the new ones use some form of DDR5.
Your comment got me thinking, and I checked the math. It turns out that light takes ~0.2 ns to travel 2 inches. But the speed of signal propagation in copper is ~0.6 c, so that takes it up to 0.3 ns. So, still pretty small compared to the overall latencies (~13-18 ns for DDR5) but it's not negligible.
I do wonder if there are nonlinearities that come in to play when it comes to these bottlenecks. Yes, by moving the RAM closer it's only reducing the latency by 0.2 ns. But, it's also taking 1/3rd of the time that it used to, and maybe they can use that extra time to do 2 or 3 transactions instead. Latency and bandwidth are inversely related, after all!
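The same propagation-delay calculation as a small sketch, comparing an assumed 2-inch on-package trace with an assumed 6-inch trace out to a socketed module:

    /* Sanity-checking the propagation-delay estimate above
     * (2" vs 6" trace lengths are assumptions). */
    #include <stdio.h>

    int main(void) {
        const double c = 3.0e8;            /* m/s, speed of light */
        const double v = 0.6 * c;          /* assumed signal speed in a PCB trace */
        const double inch = 0.0254;        /* m */

        double short_trace = 2.0 * inch;   /* on-package RAM, assumed */
        double long_trace  = 6.0 * inch;   /* SO-DIMM across the board, assumed */

        printf("2\" trace: %.2f ns\n", short_trace / v * 1e9);
        printf("6\" trace: %.2f ns\n", long_trace  / v * 1e9);
        printf("vs. ~13-18 ns CAS latency for DDR5\n");
        return 0;
    }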
Well, you can have high bandwidth and poor latency at the same time -- think ultra wide band radio burst from Earth to Mars -- but yeah, on a CPU with all the crazy co-optimized cache hierarchies and latency hiding it's difficult to see how changing one part of the system changes the whole. For instance, if you switched 16GB of DRAM for 4GB of SRAM, you could probably cut down the cache-miss latency a lot -- but do you care? If you cache hit rate is high enough, probably not. Then again, maybe chopping the worst case lets you move allocation away from L3 and L2 and into L1, which gets you a win again.
I suspect the only people who really know are the CPU manufacturer teams that run PIN/dynamorio traces against models -- and I also suspect that they are NDA'd through this life and the next and the only way we will ever know about the tradeoffs are when we see them pop up in actual designs years down the road.
DRAM latencies are pretty heinous. It makes me wonder if the memory industry will go through a similar transition to the storage industry's HDD->SSD sometime in the not too distant future.
I wonder about the practicalities of going to SRAM for main memory. I doubt silicon real estate would be the limiting factor (1T1C to 6T, isn't it?) and Apple charges a king's ransom for RAM anyway. Power might be a problem though. Does anyone have figures for SRAM power consumption on modern processes?
>> I wonder about the practicalities of going to SRAM for main memory. I doubt silicon real estate would be the limiting factor (1T1C to 6T, isn't it?) and Apple charges a king's ransom for RAM anyway. Power might be a problem though. Does anyone have figures for SRAM power consumption on modern processes?
I've been wondering about this for years. Assuming the difference is similar to the old days, I'd take 2-4GB of SRAM over 32GB of DRAM any day. Last time this came up people claimed SRAM power consumption would be prohibitive, but I have a hard time seeing that given these 50B transistor chips running at several GHz. Most of the transistors in an SRAM are not switching, so they should be optimized for leakage and they'd still be way faster than DRAM.
> The overall bandwidth isn't affected much by the distance alone.
Testing showed that the M1's performance cores had a surprising amount of memory bandwidth.
>One aspect we’ve never really had the opportunity to test is exactly how good Apple’s cores are in terms of memory bandwidth. Inside of the M1, the results are ground-breaking: A single Firestorm achieves memory reads up to around 58GB/s, with memory writes coming in at 33-36GB/s. Most importantly, memory copies land in at 60 to 62GB/s depending if you’re using scalar or vector instructions. The fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before.
It just said that bandwidth between a performance core and memory controller is great. It's not related to distance between memory controller and DRAM.
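For anyone curious, a single-core copy-bandwidth measurement of the kind AnandTech describes looks roughly like this; a minimal sketch, not their methodology, with buffer size and repeat count chosen arbitrarily:

    /* Minimal single-threaded memory-copy bandwidth sketch. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void) {
        size_t bytes = 1UL << 30;               /* 1 GiB buffers, well past any cache */
        char *src = malloc(bytes), *dst = malloc(bytes);
        if (!src || !dst) return 1;
        memset(src, 1, bytes);                  /* touch pages so they're resident */
        memset(dst, 0, bytes);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 10; i++)
            memcpy(dst, src, bytes);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        /* count read + write traffic */
        printf("copy bandwidth: %.1f GB/s\n", 10.0 * 2.0 * bytes / secs / 1e9);
        free(src); free(dst);
        return 0;
    }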
As far as I'm aware, IBM is one of the few chip-designers who have eDRAM capabilities.
IBM has eDRAM on a number of chips in varying capacities, but... its difficult for me to think of Intel, AMD, Apple, ARM, or other chips that have eDRAM of any kind.
Intel had one: the eDRAM "Crystalwell" chip, but that is seemingly a one-off and never attempted again. Even then, this was a 2nd die that was "glued" onto the main chip, and not like IBM's truly eDRAM (embedded into the same process).
You're right. My bad. It's much less common than I'd thought.
(Intel had it on a number of chips that included Iris Pro Graphics across Haswell, Broadwell, Skylake etc)
Crystalwell was the codename for the eDRAM that was grafted onto Broadwell. (EDIT: Apparently Haswell, but... yeah. Crystalwell + Haswell for eDRAM goodness)
Good point. Especially since a lot of software these days is not all that cache friendly. Realistically this means we have 2 years or so till further abstractions eat up the performance gains.
I thought the memory was one of the more interesting bits here.
My 2-year-old Intel MBP has 64 GB, and 8 GB of additional memory on the GPU. True, on the M1 Max you don't have to copy back and forth between CPU and GPU thanks to integrated memory, but the new MBP still has less total memory than my 2-year-old Intel MBP.
And it seems they just barely managed to get to 64 GiB. The whole processor chip is surrounded by memory chips. That's in part why I'm curious to see how they'll scale this. One idea would be to just have several M1 Max SoCs on a board, but that's going to be interesting to program. And getting to 1 TB of memory seems infeasible too.
Just some genuine honest curiosity here; how many workloads actually require 64GB of RAM? For instance, I'm an amateur in the music production scene, and I know that sampling-heavy workflows benefit from being able to load more audio clips fully into RAM rather than streaming them from disk. But 64GB seems a tad overkill even for that.
I guess for me I would prefer an emphasis on speed/bandwidth rather than size, but I'm also aware there are workloads that I'm completely ignorant of.
Same, I tend to get everything in 32GB but more and more often I'm going over that and having things slow down. I've also nuked an SSD in a 16GB MBP due to incredibly high swap activity. It would make no sense for me to buy another 32GB machine if I want it to last five years.
Another anecdote from someone who is also in the music production scene - 32GB tended to be the "sweet spot" in my personal case for the longest time, but I'm finding myself hitting the limits more and more as I continue to add more orchestral tracks which span well over 100 tracks total in my workflows.
I'm finding I need to commit and print a lot of these. Logic's little checker in the upper right showing RAM, Disk IO, CPU, etc also show that it is getting close to memory limits on certain instruments with many layers.
So as someone who would be willing to dump $4k into a laptop where its main workload is only audio production, I would feel much safer going with 64GB knowing there's no real upgrade if I were to go with the 32GB model outside of buying a totally new machine.
Edit: And yes, this does show the typical "fear of committing" issue that plagues all of us people making music. It's more of a "nice to have" than a necessity, but I would still consider it a wise investment. At least in my eyes. Everyone's workflow varies and others have different opinions on the matter.
I know the main reason why the Mac Pro has options for LRDIMMs for terabytes of RAM is specifically for audio production, where people are basically using their system memory as cache for their entire instrument library.
I have to wonder how Apple plans to replace the Mac Pro - the whole benefit of M1 is that gluing the memory to the chip (in a user-hostile way) provides significant performance benefits; but I don't see Apple actually engineering a 1TB+ RAM SKU or an Apple Silicon machine with socketed DRAM channels anytime soon.
I think we'd probably see Apple use the fast-and-slow-RAM method that old computers used back in the '90s.
16-32GB of RAM on the SoC, with DRAM sockets for usage past the built-in amount.
Though by the time we see an ARM Mac Pro they might move to stacked DRAM on the SoC. But I'd really think a two-tier memory system would be Apple's method of choice.
I'd also expect a dual-SoC setup.
So I don't expect to see that anytime soon.
I'd love to get my hands on a Mac Mini with the M1 Max.
I went for 64GB. I have one game where 32GB is on the ragged edge - so for the difference it just wasn't worth haggling over. Plus it doubled the memory bandwidth - nice bonus.
And unused RAM isn't wasted - the system will use it for caching. Frankly I see memory as one of the cheapest performance variables you can tweak in any system.
> how many workloads actually require 64gb of ram?
Don't worry, Chrome will eat that up in no time!
More seriously, I look forward to more RAM for some of the datasets I work with. At least so I don't have to close everything else while running those workloads.
As a data scientist, I sometimes find myself going over 64 GB. Of course it all depends on how large data I'm working on. 128 GB RAM helps even with data of "just" 10-15 GB, since I can write quick exploratory transformation pipelines without having to think about keeping the number of copies down.
I could of course chop up the workload earlier, or use samples more often. Still, while not strictly necessary, I regularly find I get stuff done quicker and with less effort thanks to it.
Not many, but there are a few that need even more. My team is running SQL servers on their laptops (development and support) and when that is not enough, we go to Threadrippers with 128-256GB of RAM. Other people run Virtual Machines on their computers (I work most of the time in a VM) and you can run several VMs at the same time, eating up RAM really fast.
On a desktop Hackintosh, I started with 32GB that would die with out of memory errors when I was processing 16bit RAW images at full resolution. Because it was Hackintosh, I was able to upgrade to 64GB so the processing could complete. That was the only thing running.
What image dimensions? What app? I find this extremely suspect, but it’s plausible if you’ve way undersold what you’re doing. 24Mpixel 16bit RAW image would have no problem generally on an 4gb machine if it’s truly the only app running and the app isn’t shit. ;)
I shoot timelapse using Canon 5D RAW images, I don't know the exact dimensions off the top of my head but greater than 5000px wide. I then grade them using various programs, ultimately using After Effects to render out full frame ProRes 4444. After Effects was running out of memory. It would crash and fail to render my file. It would display an error message that told me specifically it was out of memory. I increased the memory available to the system. The error goes away.
But I love the fact that you have this cute little theory to doubt my actual experience to infer that I would make this up.
> But I love the fact that you have this cute little theory to doubt my actual experience to infer that I would make this up.
The facts were suspect, your follow up is further proof I had good reason to be suspect. First off, the RAW images from a 5D aren’t 16 bit. ;) Importantly, the out of memory error had nothing to do with the “16 bit RAW files”, it was video rendering lots of high res images that was the issue which is a very different issue and of course lots of RAM is needed there. Anyway, notice I said “but it’s plausible if you’ve way undersold what you’re doing”, which is definitely the case here, so I’m not sure why it bothered you.
>> die with out of memory errors when I was processing 16bit RAW images
> Canon RAW images are 14bit
You don’t see the issue?
> Are you just trying to be argumentative for the fun?
In the beginning, I very politely asked a clarifying question making sure not to call you a liar as I was sure there was more to the story. You’re the one who’s been defensive and combative since, and honestly misrepresenting facts the entire time. Where you wrong at any point? Only slightly, but you left out so many details that were actually important to the story for anyone to get any value out of your anecdata. Thanks to my persistence, anyone who wanted to learn from your experience now can.
>> I was processing 16bit RAW images at full resolution.
>> ...using After Effects to render out full frame ProRes 4444.
Those are two different applications to most of us. No one is accusing you of making things up, just that the first post wasn't fully descriptive of your use case.
Working with video will use up an extraordinary amount of memory.
Some of the genetics stuff I work on requires absolute gobs of RAM. I have a single process that requires around 400GB of RAM that I need to run quite regularly.
It’s a slight exaggeration, I also have an editor open and some dev process (test runner usually). It’s not just caching, I routinely hit >30 GB swap with fans revved to the max and fairly often this becomes unstable enough to require a reboot even after manually closing as much as I can.
I mean, some of this comes down to poor executive function on my part, failing to manage resources I’m no longer using. But that’s also a valid use case for me and I’m much more effective at whatever I’m doing if I can defer it with a larger memory capacity.
Since applications have virtual memory, it sort of doesn’t matter? The OS will map these to actual pages based on how many processes are available, etc. So if only one app runs and it wants lots of memory, it makes sense to give it lots of memory - that is the most “economical” decision from both a energy and performance POV.
So, M1 has been out for a while now, with HN doom and gloom about not being able to put enough memory into them. Real world usage has demonstrated far less memory usage than people expected (I don't know why, maybe someone paid attention and can say). The result is that 32G is a LOT of memory for an M1-based laptop, and 64G is only needed for very specific workloads I would expect.
Measuring memory usage is a complicated topic and just adding numbers up overestimates it pretty badly. The different priorities of memory are something like: 1. wired (must be in RAM), 2. dirty (can be swapped), 3. purgeable (can be deleted and recomputed), 4. file-backed dirty (can be written to disk), 5. file-backed clean (can be read back in).
Also note M1's unified memory model is actually worse for memory use not better. Details left as an exercise for the reader.
Unified memory is a performance/utilisation tradeoff. I think the thing is it's more of an issue with lower memory specs. The fact you don't have 4GB (or even 2 GB) dedicated memory on a graphics card in a machine with 8GB of main memory is a much bigger deal than not having 8GB on the graphics card on a machine with 64 GB of main RAM.
Or like games, even semi-casual ones. Civ6 would not load at all on my mac mini. Also had to fairly frequently close browser windows as I ran out of memory.
I couldn't load Civ6 until I verified game files in Steam, and now it works pretty perfectly. I'm on 8GB and always have Chrome, Apple Music and OmniFocus running alongside.
I'm interested to see how the GPU on these performs, I pretty much disable the dGPU on my i9 MBP because it bogs my machine down. So for me it's essentially the same amount of memory.
From the perspective of your GPU, that 64GB of main memory attached to your CPU is almost as slow to fetch from as if it were memory on a separate NUMA node, or even pages swapped to an NVMe disk. It may as well not be considered "memory" at all. It's effectively a secondary storage tier.
Which means that you can't really do "GPU things" (e.g. working with hugely detailed models where it's the model itself, not the textures, that take up the space) as if you had 64GB of memory. You can maybe break apart the problem, but maybe not; it all depends on the workload. (For example, you can't really run a Tensorflow model on a GPU with less memory than the model size. Making it work would be like trying to distribute a graph-database routing query across nodes — constant back-and-forth that multiplies the runtime exponentially. Even though each step is parallelizable, on the whole it's the opposite of an embarrassingly-parallel problem.)
>The SoC has access to 16GB of unified memory. This uses 4266 MT/s LPDDR4X SDRAM (synchronous DRAM) and is mounted with the SoC using a system-in-package (SiP) design. A SoC is built from a single semiconductor die whereas a SiP connects two or more semiconductor dies.
SDRAM operations are synchronised to the SoC processing clock speed. Apple describes the SDRAM as a single pool of high-bandwidth, low-latency memory, allowing apps to share data between the CPU, GPU, and Neural Engine efficiently.
In other words, this memory is shared between the three different compute engines and their cores. The three don't have their own individual memory resources, which would need data moved into them. This would happen when, for example, an app executing in the CPU needs graphics processing – meaning the GPU swings into action, using data in its memory. https://www.theregister.com/2020/11/19/apple_m1_high_bandwid...
I know; I was talking about the computer the person I was replying to already owns.
The GP said that they already essentially have 64GB+8GB of memory in their Intel MBP; but they don't, because it's not unified, and so the GPU can't access the 64GB. So they can only load 8GB-wide models.
Whereas with the M1 Pro/Max the GPU can access the 64GB, and so can load 64GB-wide models.
How much of that 64 GB is in use at the same time though? Caching not recently used stuff from DRAM out to an SSD isn't actually that slow, especially with the high speed SSD that Apple uses.
Right. And to me, this is the interesting part. There's always been that size/speed tradeoff ... by putting huge amounts of memory bandwidth on "less" main RAM, it becomes almost half-ram-half-cache; and by making the SSD fast it becomes more like massive big half-hd-half-cache. It does wear them out, however.
You were (unintentionally) trolled. My first post up there was alluding to the legend that Bill Gates once said, speaking of the original IBM PC, "640K of memory should be enough for anybody." (N.B. He didn't[0])
Video and VFX generally don't need to keep whole sequences in RAM persistently these days because:
1. The high-end SSDs in all Macs can keep up with that data rate (3GB/sec)
2. Real-time video work is virtually always performed on compressed (even losslessly compressed) streams, so the data rate to stream is less than that.
But it's also been around for at least a year. And upcoming PCIe 5 SSDs will up that to 10-14GBps.
I'm saying Apple might have wanted to emphasise their more standout achievements. Such as on the CPU front, where they're likely to be well ahead for a year - competition won't catch up until AMD starts shipping 5nm Zen4 CPUs in Q3/Q4 2022.
I'm guessing that's new for the 13" or for the M1, but my 16‑inch MacBook Pro purchased last year had 64GB of memory. (Looks like it's considered a 2019 model, despite being purchased in September 2020).
Really curious if the memory bandwidth is entirely available to the CPU if the GPU is idle. An nvidia RTX3090 has nearly 1TB/s bandwidth, so the GPU is clearly going to use as much of the 400GB/s as possible. Other unified architectures have multiple channels or synchronization to memory, such that no one part of the system can access the full bandwidth. But if the CPU can access all 400GB/s, that is an absolute game changer for anything memory bound. Like 10x faster than an i9 I think?
Not sure if it will be available, but 400GB/s is way too much for 8 cores to take up. You would need some sort of avx512 to hog up that much bandwidth.
Moreover, it's not clear how much bandwidth the M1 Max CPU interconnect/bus provides.
--------
Edit: Add common sense about HPC workloads.
There is a fundamental idea called the memory-access-to-computation ratio. We can't assume a 1:0 ratio, since that would mean doing literally nothing except copying.
Typically your program needs serious fixing if it can't achieve 1:4. (This figure comes from a CUDA course. But I think it should be similar for SIMD)
Edit: also a lot of that bandwidth is fed through cache. Locality will eliminate some orders of magnitudes of memory access, depending on the code.
> Not sure if it will be available, but 400GB/s is way too much for 8 cores to take up. You would need some sort of avx512 to hog up that much bandwidth.
If we assume that the frequency is 3.2GHz and an IPC of 3 with well-optimized code (which is conservative for the performance cores since they are extremely wide) and count only performance cores, we get 5 bytes per instruction. M1 supports 128-bit Arm Neon, so peak bandwidth usage per instruction (if I didn't miss anything) is 32 bytes.
Don't know the clock speed, but 8 cores at 3GHz working on 128-bit SIMD is 8 * 3 * 16 = 384GB/s, so we are in the right ballpark. Not that I personally have a use for that =) Oh, wait, bloody Java GC might be a use for that. (LOL, FML or both).
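Putting those two estimates side by side, using the assumed figures from the comments above (8 performance cores, ~3 GHz clocks, 3 IPC, 16-byte NEON accesses):

    /* The two bandwidth estimates above, spelled out (all figures assumed). */
    #include <stdio.h>

    int main(void) {
        /* ~5 bytes of bandwidth per instruction at 8 cores * 3.2 GHz * 3 IPC */
        double issue_rate = 8 * 3.2e9 * 3;
        printf("bytes per instruction at 400 GB/s: %.1f\n", 400e9 / issue_rate);

        /* one 128-bit (16-byte) NEON access per core per cycle at 3 GHz */
        printf("8 cores * 3 GHz * 16 B = %.0f GB/s\n", 8 * 3e9 * 16 / 1e9);
        return 0;
    }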
But the classic SIMD problem is matrix-multiplication, which doesn't need full memory bandwidth (because a lot of the calculations are happening inside of cache).
The question is: what kind of problems are people needing that want 400GB/s bandwidth on a CPU? Well, probably none frankly. The bandwidth is for the iGPU really.
The CPU just "might as well" have it, since its a system-on-a-chip. CPUs usually don't care too much about main-memory bandwidth, because its like 50ns+ away latency (or ~200 clock ticks). So to get a CPU going in any typical capacity, you'll basically want to operate out of L1 / L2 cache.
> Oh, wait, bloody Java GC might be a use for that. (LOL, FML or both).
For example, I know you meant the GC as a joke. But if you think of it, a GC is mostly following pointer->next kind of operations, which means its mostly latency bound, not bandwidth bound. It doesn't matter that you can read 400GB/s, your CPU is going to read an 8-byte pointer, wait 50-nanoseconds for the RAM to respond, get the new value, and then read a new 8-byte pointer.
Unless you can fix memory latency (and hint, no one seems to be able to do so), you'll only be able to hit 160MB/s or so. No matter how high your theoretical bandwidth is, you are latency-locked at a much lower value.
Yeah the marking phase cannot be efficiently vectorized. But I wonder if it can help with compacting/copying phase.
Also for me the process sounds oddly familiar to vmem table walking. There is currently a RISC-V J extension drafting group. I wonder what they can come up with.
But they are demonstrating with 16 cores + 30 GB/s & 128 cores + 190 GB/s. And to my understanding they did not really mention what type of computational load did they perform. So this does not sound too ridiculous. M1 max is pairing 8 cores + 400GB/s.
How do you prefetch "node->next" where "node" is in a linked list?
Answer: you literally can't. And that's why this kind of coding style will forever be latency bound.
EDIT: Prefetching works when the address can be predicted ahead of time. For example, when your CPU-core is reading "array", then "array+8", then "array+16", you can be pretty damn sure the next thing it wants to read is "array+24", so you prefetch that. There's no need to wait for the CPU to actually issue the command for "array+24", you fetch it even before the code executes.
Now if you have "0x8009230", which points to "0x81105534", which points to "0x92FB220", good luck prefetching that sequence.
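A sketch of that pattern: a linked list whose nodes are scattered randomly through a large pool, so every "p = p->next" costs roughly a full DRAM round trip. Node layout and pool size here are arbitrary choices for illustration:

    /* Why "node->next" chasing is latency-bound: each load's address depends on
     * the previous load, so the prefetcher has nothing to work with. Sketch only. */
    #include <stdio.h>
    #include <stdlib.h>

    struct node { struct node *next; long pad[7]; };  /* one node per 64-byte line */

    int main(void) {
        size_t n = 1UL << 22;                      /* ~256 MB of nodes, far beyond cache */
        struct node *pool = malloc(n * sizeof *pool);
        size_t *order = malloc(n * sizeof *order);
        if (!pool || !order) return 1;

        /* random permutation -> next pointers jump all over memory */
        for (size_t i = 0; i < n; i++) order[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand() % (i + 1);
            size_t t = order[i]; order[i] = order[j]; order[j] = t;
        }
        for (size_t i = 0; i < n; i++)
            pool[order[i]].next = &pool[order[(i + 1) % n]];

        /* Each iteration stalls for roughly one DRAM round trip:
         * ~8 bytes of useful data per ~50-100 ns => a few hundred MB/s at best. */
        struct node *p = &pool[order[0]];
        for (size_t i = 0; i < n; i++) p = p->next;

        printf("%p\n", (void *)p);                 /* keep the traversal from being optimized out */
        free(order); free(pool);
        return 0;
    }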
--------
Which is why servers use SMT / hyperthreading, so that the core can "switch" to another thread while waiting those 50-nanoseconds / 200-cycles or so.
I don't really know how the implementation of a tracing GC works but I was thinking they could do some smart memory ordering to land in the same cache-line as often as possible.
But that’s just the marking phase, isn’t it? And most of it can be done fully in parallel, so while not all CPU cores can be maxed out with that, more often than not the original problem itself can be hard to parallelize to that level, so “wasting” a single core may very well be worth it.
I always like pointing out Knuth's dancing links algorithm for Exact-covering problems. All "links" in that algorithm are of the form "1 -> 2 -> 3 -> 4 -> 5" at algorithm start.
Then, as the algorithm "guesses" particular coverings, it turns into "1->3->4->5", or "1->4", that is, always monotonically increasing.
As such, no dynamic memory is needed ever. The linked-list is "statically" allocated at the start of the program, and always traversed in memory order.
Indeed, Knuth designed the scheme as "imagine doing malloc/free" to remove each link, but then later "free/malloc" to undo the previous steps (because in exact-covering backtracking, you'll try something, realize it's a dead end, and need to backtrack). Instead of a malloc followed up by a later free, you "just" drop the node out of the linked list, and later reinsert it. So the malloc/free is completely redundant.
In particular: a given "guess" into an exact-covering problem can only "undo" its backtracking to the full problem scope. From there, each "guess" only removes possibilities. So you use the "maximum" amount of memory at program start, you "free" (but not really) nodes each time you try a guess, and then you "reinsert" those nodes to backtrack to the original scope of the problem.
Finally, when you realize that, you might as well put them all into order for not only simplicity, but also for speed on modern computers (prefetching and all that jazz).
It's a very specific situation but... it does happen sometimes.
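A minimal sketch of that remove/reinsert trick on a plain doubly linked list (not Knuth's full exact-cover structure, just the "dancing" part):

    /* The dancing-links trick described above: "remove" and "reinsert" without
     * any malloc/free, because a detached node keeps its own left/right pointers. */
    #include <stdio.h>

    struct dlx { struct dlx *left, *right; int id; };

    /* unlink x from the list, but leave x's own pointers intact */
    static void cover(struct dlx *x)   { x->left->right = x->right; x->right->left = x->left; }
    /* undo: x still remembers its neighbours, so it can splice itself back in */
    static void uncover(struct dlx *x) { x->left->right = x;        x->right->left = x;        }

    int main(void) {
        struct dlx n[5];
        for (int i = 0; i < 5; i++) {
            n[i].id    = i;
            n[i].left  = &n[(i + 4) % 5];
            n[i].right = &n[(i + 1) % 5];
        }

        cover(&n[2]);                           /* "guess": drop node 2 */
        for (struct dlx *p = n[0].right; p != &n[0]; p = p->right)
            printf("%d ", p->id);               /* prints: 1 3 4 */
        printf("\n");

        uncover(&n[2]);                         /* backtrack: node 2 splices itself back */
        for (struct dlx *p = n[0].right; p != &n[0]; p = p->right)
            printf("%d ", p->id);               /* prints: 1 2 3 4 */
        printf("\n");
        return 0;
    }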
AMD showed with their Infinity Cache that you can get away with much less bandwidth if you have large caches. It has the side effect of radically reducing power consumption.
Apple put 32MB of cache in their latest iPhone. 128 or even 256MB of L3 cache wouldn't surprise me at all given the power benefits.
Apple put ProMotion in the built in display, so while it can ramp up to 120Hz, it'll idle at more like 24 Hz when showing static content. (the iPad Pro goes all the way down to 10Hz, but some early sources seem to say 24Hz for these MacBook Pros.) There may also be panel self refresh involved, in which case a static image won't even need that much. I bet the display coprocessors will expose the adaptive refresh functionality over the external display connectors as well.
It only takes a tiny animation (eg. a spinner, pulsing glowing background, animated clock, advert somewhere, etc), and suddenly the whole screen is back to 120 Hz refresh.
Don't know much about the graphics on an M1. Does it not render to a framebuffer? Is that framebuffer spread over all 4 memory banks? Can't wait to read all about it.
The updates from the Asahi Linux team are fantastic for getting insights into the M1 architecture. They've not really dug deep into the GPU yet, but that's coming soon.
When nothing is changing, you do not have to touch the GPU. Yes, without Panel Self Refresh there would be this many bits going to the panel at that rate, but the display engine would keep resubmitting the same buffer. No need to rerender when there's no damage. (And when there is, you don't have to rerender the whole screen, only the combined damage of the previous and current frames.)
More memory bandwith = 10x faster than an i9 ? this makes no sense to me doesn't clock speed and cores determine the major part of the performance of a cpu ?
Yes and no. There are many variables to take into account. An example from the early days of the PPC architecture was their ability to pre-empt instructions. This gave performance boosts even in the absence of a higher clock speed. I can't speak specifically on the M1, but there are other things outside of clock speed and cores that determine speed.
Yes, but it's a double edged sword. It means you're using relatively slow ram for the GPU, and that the GPU takes memory bandwidth away from the CPU as well. Traditionally we've ended up with something that looks like Intel's kinda crappy integrated video.
The copying process was never that much of a big deal, but paying for 8GB of graphics ram really is.
> The copying process was never that much of a big deal
I don't know about that? Texture memory management in games can be quite painful. You have to consider different hardware setups and being able to keep the textures you need for a certain scene in memory (or not, in which case, texture thrashing).
The copying process was quite a barrier to using compute (general purpose GPU) to augment CPU processing and you had to ensure that the work farmed to the GPU was worth the cost of the to/from costs. Game consoles of late have generally had unified memory (UMA) and it's quite a nice advantage because moving data is a significant bottleneck.
Using Intel's integrated video as a way to assess the benefits of unified memory is off target. Intel had a multitude of design goals for their integrated GPU and UMA was only one aspect so it's not so easy to single that out for any shortcomings that you seem to be alluding to.
If you're looking at the SKU with the high GPU core count and 64 gigs of LPDDR5, the total memory bandwidth (400 GBps) isn't that far off from the bandwidth a discrete GPU would have to its local pool of memory.
You also have an (estimated from die shots) 64 megabyte SRAM system level cache and large L2 and L1 CPU caches, but you are indeed sharing the memory bandwidth between the CPU and GPU.
I'm looking forward to these getting into the hands of testers.
> I'm still a bit sad that the era of "general purpose computing" where CPU can do all workloads is coming to an end.
They’ll still do all workloads, but are optimized for certain workloads. How is that any different than say, a Xeon or EPYC cpu designed for highly threaded (server/scientific computing) applications?
In this context the absence of the 27-inch iMac was interesting. If these SoCs were not deemed to be 'right' for the bigger iMac, then possibly a more CPU-focused / developer-focused SoC may be in the works for the iMac?
I doubt they are going to make different chips for prosumer devices. They are going to spread out the M1 pro/max upgrade to the rest of the lineup at some point during the next year, so they can claim "full transition" through their quoted 2 years.
The wildcard is the actual mac pro. I suspect we aren't going to hear about mac pro until next Sept/Oct events, and super unclear what direction they are going to go. Maybe allowing config of multiple M1 max SOCs somehow working together. Seems complicated.
On reflection I think they've decided that their pro users want 'more GPU not more CPU' - they could easily have added a couple more CPU cores but it obviously wasn't a priority.
Agreed that it's hard to see how designing a CPU just for the Mac Pro would make any kind of economic sense but equally struggling to see what else they can do!
I think we will see an iMac Pro with incredible performance. Mac Pros, maybe in the next years to come. It's a really high-end product to release new specs for. Plus, if they release it with M1 Max chips, what would be the difference? A nicer case and more upgrade slots? I don't see the advantage in power. I think Mac Pros will be upgraded maybe 2 years from now.
They also have a limited headcount and resources so they wouldn't want to announce M1x/pro/max for all machines now and have employees be idle for the next 3 months.
Notebooks also have a higher profit margin, so they sell them to those who need to upgrade now. The lower-margin systems like Mini will come later. And the Mac Pro will either die or come with the next iteration of the chips.
Yup. Once I saw the 24" iMac I knew the 27" had had it's chips. 30" won't actually be much bigger than the 27" if the bezels shrink to almost nothing - which seems to be the trend.
They're not meant to go together like that -- there's not really an interconnect for it, or any pins on the package to enable something like that. Apple would have to design a new Mx SoC with something like that as an explicit design goal.
I think the problem would be how one chip can access the memory of the other one. The big advantage in the M1xxxx is the unified memory. I don't think the chips have any hardware to support cache coherency and so on spanning more than one chip.
You would have to implement single system image abstraction, if you wanted more than a networked cluster of M1s in a box, in the OS using just software plus virtual memory. You'd use the PCIe as the interconnect. Similar has been done by other vendors for server systems, but it has tradeoffs that would probably not make sense to Apple now.
A more realistic question would be what good hw multisocket SMP support would look like in M1 Max or later chips, as that would be a more logical thing to build if Apple wanted this.
The rumor has long been that the future Mac Pro will use 2-4 of these “M1X” dies in a single package. It remains to be seen how the inter-die interconnect will work / where those IOs are on the M1 Pro/Max die.
The way I interpreted it is that it's like lego so they can add more fast cores or more efficiency cores depending on the platform needs. The successor generations will be new lego building blocks.
Not exactly. The M1 CPU, GPU, and RAM were all capped in the same package. The new ones appear to be more of a single board soldered onto the mainboard, with discrete CPU, GPU, and RAM packages each capped individually, if their "internals" promo video is to be believed (and it usually is an exact representation of the shipping product) https://twitter.com/cullend/status/1450203779148783616?s=20
Suspect this is a great way for them to manage demand and various yields by having 2 CPUs (or one, if the difference between Pro/Max is yield on memory bandwidth) and discrete RAM/GPU components.
I know nothing about hardware, basically. Do Apple’s new GPU cores come close to the capabilities of discrete GPUs like what are used for gaming/scientific applications? Or are those cards a whole different thing?
1. If you're a gamer, this seems comparable to a 3070 Laptop, which is comparable to a 3060 Desktop.
2. If you're a ML researcher you use CUDA (which only works on NVIDIA cards), they have basically a complete software lock unless you want to spend an undefined number of X hundreds of hours fixing and troubleshooting compatibility issues.
There has been an M1 fork of TensorFlow almost since the chip launched last year. I believe Apple did the legwork. It's a hoop to jump through, yes, and no one's training big image models or transformers with this, but I imagine students or someone sandboxing a problem offline would benefit from the increased performance over CPU-only.
Not a great fit. Something like Ampere altra is better as it gives you 80 cores and much more memory which better fits a server. A server benefits more from lots of weaker cores than a few strong cores. The M1 is an awesome desktop/laptop chip and possibly great for HPC, but not for servers.
What might be more interesting is to see powerful gaming rigs built around the these chips. They could have build a kickass game console with these chips.
Why they didn't lean into that aspect of the Apple TV still mystifies me. A Wii-mote style pointing device seems such a natural fit for it, and has proven gaming utility. Maybe patents were a problem?
Why? There are plenty of server oriented ARM platforms available for use (See AWS Graviton). What benefit do you feel Apple’s platform gives over existing ones?
The Apple cores are full custom, Apple-only designs.
The AWS Graviton are Neoverse cores, which are pretty good, but clearly these Apple-only M1 cores are above-and-beyond.
---------
That being said: these M1 cores (and Neoverse cores) are missing SMT / Hyperthreading, and a few other features I'd expect in a server product. Servers are fine with the bandwidth/latency tradeoff: more (better) bandwidth but at worse (higher) latencies.
My understanding is that you don't really need hyperthreading on a RISC CPU because decoding instructions is easier and doesn't have to be parallelised as with hyperthreading.
The DEC Alpha had SMT on their processor roadmap, but it was never implemented as their own engineers told the Compaq overlords that they could never compete with Intel.
"The 21464's origins began in the mid-1990s when computer scientist Joel Emer was inspired by Dean Tullsen's research into simultaneous multithreading (SMT) at the University of Washington."
Okay, the whole RISC thing is stupid. But ignoring that aspect of the discussion... POWER9, one of those RISC CPUs, has 8-way SMT. Neoverse E1 also has SMT-2 (aka: 2-way hyperthreading).
SMT / Hyperthreading has nothing to do with RISC / CISC or whatever. Its just a feature some people like or don't like.
RISC CPUs (Neoverse E1 / POWER9) can perfectly do SMT if the designers wanted.
Don't think that is entirely true. Lots of features which exist on both RISC and CISC CPUs have a different natural fit. Using micro-ops, for example, is more important on a CISC than on a RISC CPU, even if both benefit. Likewise, pipelining is a more natural fit on RISC than CISC, while a micro-op cache is more important on CISC than RISC.
I don't even know what RISC or CISC means anymore. They're bad, non-descriptive terms. 30 years ago, RISC or CISC meant something, but not anymore.
Today's CPUs are pipelined, out-of-order, speculative, superscalar, (sometimes) SMT, SIMD, multi-core with MESI-based snooping for cohesive caches. These words actually have meaning (and in particular, describe a particular attribute of performance for modern cores).
RISC or CISC? useful for internet flamewars I guess but I've literally never been able to use either term in a technical discussion.
-------
I said what I said earlier: this M1 Pro / M1 Max, and the ARM Neoverse cores, are missing SMT, which seems to come standard on every other server-class CPU (POWER9, Intel Skylake-X, AMD EPYC).
Neoverse N1 makes up for it with absurdly high core counts, so maybe its not a big deal. Apple M1 however has very small core counts, I doubt that Apple M1 would be good in a server setting... at least not with this configuration. They'd have to change things dramatically to compete at the higher end.
POWER9, RISC-V, and ARM all have microcoded instructions. In particular, division, which is very complicated.
As all CPUs have decided that hardware-accelerated division is a good idea (and in particular: microcoded, single-instruction division makes more sense than spending a bunch of L1 cache on a series of instructions that everyone knows is "just division" and/or "modulo"), microcode just makes sense.
The "/" and "%" operators are just expected on any general purpose CPU these days.
30 years ago, RISC processors didn't implement divide or modulo. Today, all processors, even the "RISC" ones, implement it.
It's slightly more general than that: hiding inefficient use of functional units. A lot of the time that's memory latency causing the inability to keep FUs fed, like you say, but I've seen other reasons, like a wide but diverse set of FUs that have trouble applying to every workload.
The classic reason quoted for SMT is to allow the functional units to be fully utilised when there is instruction-to-instruction dependencies - that is, the input of one instruction is the output from the previous instruction. Doing SMT allow you to create one large pool of functional units and share them between multiple threads, hopefully increasing the chances that they will be fully used.
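A small illustration of the dependency-chain problem SMT is meant to hide: with a single accumulator every add waits on the previous one, while splitting the work into independent accumulators lets more functional units stay busy within one thread. Array size and the choice of four accumulators are arbitrary:

    /* Dependency chains vs independent chains (illustrative sketch). */
    #include <stdio.h>

    #define N (1 << 20)
    static double a[N];

    double sum_serial(void) {
        double s = 0;                            /* every add waits on the previous one */
        for (int i = 0; i < N; i++) s += a[i];
        return s;
    }

    double sum_split(void) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;   /* four independent chains */
        for (int i = 0; i < N; i += 4) {
            s0 += a[i]; s1 += a[i + 1]; s2 += a[i + 2]; s3 += a[i + 3];
        }
        return s0 + s1 + s2 + s3;
    }

    int main(void) {
        for (int i = 0; i < N; i++) a[i] = 1.0;
        printf("%f %f\n", sum_serial(), sum_split());
        return 0;
    }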
Well, tons, there isn't another ARM core that can match a single M1 Firestorm, core to core. Heck, only the highest performance x86 cores can match a Firestorm core. and that's just raw performance, not even considering power efficiency. But of course, Apple's not sharing.
They were, but have stopped talking about that for years. The project is probably canceled; I've heard Jim Keller talk about how that work was happening simultaneously with Zen 1.
> Today, Apple is carbon neutral for global corporate operations, and by 2030, plans to have net-zero climate impact across the entire business, which includes manufacturing supply chains and all product life cycles. This also means that every chip Apple creates, from design to manufacturing, will be 100 percent carbon neutral.
But what they won't do is put the chip in an expandable and repairable system so that you don't have to discard and replace it every few years. This renders the carbon-neutrality of the chips meaningless. It's not the chip, it's the packaging that is massively unfriendly to the environment, stupid.
Apple, the company that requires the entire panel to be replaced by design when a 6 dollar display cable malfunctions, is proud to announce its latest marketing slogan for a better environment.
Just because you're not getting that panel back doesn't mean it's destroyed and wasted. I figure that these policies simplify their front-line technician jobs, getting faster turnaround times and higher success rates. Then they have a different department that sorts through all the removed/broken parts, repairing and using parts from them. No idea if this is what they actually do, but it would be the smart way to handle it.
It's possible for both companies to be in the wrong.
The recycling center shouldn't have resold the devices (which is, as you point out, effectively theft). However, Apple should not be shredding hundreds of thousands of otherwise usable devices.
Apple does nothing to improve front line technician procedures. They aren't even an engineering factor. If you happen to be able to replace something on an Apple product, it's only because the cost-benefit ratio wasn't in favor of making that part hostile to work with.
Apple puts 56 screws in the Unibody MBP keyboards. They were practically the pioneer of gluing components in permanently. They don't care about technicians. Not even their own. They have been one of the leaders of the anti-right-to-repair movement from day one.
Oh hey, they went back to screws on the keyboards? That's nice; they used to be single-use plastic rivets, so at least you can redo that.
Also Apple's glue isn't usually that bad to work with. Doesn't leave much residue, so as long as you know where to apply the heat you can do a clean repair and glue the new component back in.
> Apple does nothing to improve front line technician procedures.
I’m not a fan of planned obsolescence and waste, but this is clearly wrong. They’ve spent loads of engineering effort designing a machine for their stores that can replace, reseal, and test iPhone screens out back.
So what’s your proposal? How big would a “phone” be with all the features an iPhone Pro has? I am by no means an Apple fanboy, but just as a modern car engine can’t be tweaked the way it was 50 years ago, because of all the miniaturization that largely exists for efficiency gains, the same is true of phones.
But at the same time, a single chip with everything included also makes these phones pretty sturdy: either it fails completely, or it keeps working for years.
An interesting comparison is Formula 1 cars: peak performance and parts that can be changed in seconds while still running. Even average modern cars have hundreds of parts that a layperson can reach with simple tools. Apple are obviously making a trade-off (close it down for reduced size and better weather/water sealing), but then they don't get to pretend to be an environmentally conscious company, as that is antithetical to their design goals.
Emissions affect everyone on the planet, no matter where they happen. But polluting the ground or water only happens in China, so a lot of Americans that care about emissions don't care about the other types of pollution, because it doesn't affect them.
You would be surprised just how much food you eat has been grown in China using polluted land and water.
It's not so much fresh vegetables, but ingredients in other types of food -- especially the frozen fruit, vegetables and farmed seafood that finds its way into grocery store and restaurant supply chains.
Not OP. Personally, I've had Dell, HP and Sony laptops. But the macs have been the longest lasting of them all. My personal pro is from 2015.
It has also come to a point where none of the upgrade options make sense for me. 512GB is plenty. RAM might be an issue - but I honestly don't have enough data on that. The last time I had more than 16GB of RAM was in 2008, on my hand-built desktop.
As long as the battery can be replaced/fixed - even if it's not user serviceable, I'm okay with that. I'd guess I'm not in the minority here. Most people buy a computer and then take it to the store even if there's a minor issue. And Apple actually shines here. I have gotten my other laptops serviced - but only at unauthorized locations with questionable spare parts. With Apple, every non-tech-savvy person I know has been able to take theirs to an Apple Store at some point and thereby extend its life.
That's why I believe having easily accessible service locations does more to device longevity than being user-serviceable.
(In comparison, HTC wanted 4 weeks to fix my phone, plus a week each way in shipping time, with me paying shipping costs on top of the cost of the repair. Of course, I abandoned the phone entirely rather than paying to fix it.)
We could actually test this hypothesis: if we could ask an electronics recycler about the average age of the devices they get by brand, we would get a clear idea of which brands actually last longer.
I'd much rather have the ability to fix a device myself than be locked into a vendor controlled repair solution. I've been able to extend the life of many devices I've had (the earliest from 2010) through repairs like dust removal, RAM upgrades and thermal paste reapplication.
Also worth noting that some people might be taking laptops to repair shops precisely because they are not user serviceable. Companies like framework are trying to change this with well-labelled internals and easily available parts.
I'm guessing they mean that greenwashing statements about lower CO2 emissions gloss over more "traditional" pollution such as heavy metals, organic solvents, SO2, and NOx. Taming overconsumption is greener than finding ways to marginally reduce per-unit emissions on ever more industrial production.
> "AirPods are designed with numerous materials and features to reduce their environmental impact, including the 100 percent recycled rare earth elements used in all magnets. The case also uses 100 percent recycled tin in the solder of the main logic board, and 100 percent recycled aluminum in the hinge. AirPods are also free of potentially harmful substances such as mercury, BFRs, PVC, and beryllium. For energy efficiency, AirPods meet US Department of Energy requirements for battery charger systems. Apple’s Zero Waste program helps suppliers eliminate waste sent to landfills, and all final assembly supplier sites are transitioning to 100 percent renewable energy for Apple production. In the packaging, 100 percent of the virgin wood fiber comes from responsibly managed forests."
Weird that they leave out the parts about the battery and casing waste, and that they're designed to last only about 18 months on average, to force you to buy new ones.
No, because it's a limitation of the battery. And there is a reason why they're manufactured as a non-repairable product: they house a battery, speakers, microphones, Bluetooth, and a logic board with other electronics. Space is so scarce that they need to be machined very accurately. But hey, bashing Apple is easier than thinking about why.
The market has already spoken: it wants tiny things hanging on your ears. There is a limit to what we can expect from such things.
>so that you don't have to discard and replace it every few years
Except you and I surely must know that's not true, that their machines have industry leading service lifetimes, and correspondingly high resale values as a result. Yes some pro users replace their machines regularly but those machines generally go on to have long productive lifetimes. Many of these models are also designed to be highly recyclable when the end comes. It's just not as simple as you're making out.
Right. I'm not speaking to iPhones or iPads here, but the non-serviceability creates a durability that's pretty much unmatched by Windows laptops.
Was resting my 2010 MBP on the railing of a second story balcony during a film shoot and it dropped onto the marble floor below. Got pretty dented, but all that didn't work was the ethernet port. Got the 2015 one and it was my favorite machine ever - until it got stolen.
2017 one (typing on now) is the worst thing I've ever owned and I'm looking forward to getting one of the new ones. 2017 one:
-Fries any low-voltage USB device I plug in (according to some internal Facebook forums, they returned 2-5k of this batch for that reason)
-When it fried an external drive plugged in on the right, also blew out the right speaker.
-Every time I try to charge it I get to guess which USB-C port is going to work for charging. If I pick wrong I have to power cycle the power brick (this is super fun when the laptop's dead and there's no power indicator, as there is on the revived MagSafe)
-A half-dime-shaped bit of glass popped out of the bottom of the screen when it was under load - this has happened to others in the same spot, but sure, user error...
Pissed Apple wouldn't replace it given how many other users have had the same issues, but this thing has taken a beating as have my past laptops. I'll still give them money if the new one proves to be as good as it seems.
> their machines have industry leading service lifetimes
Please stop copying marketing content, it really doesn't help your argument.
Additionally, macbooks have high failure rates, especially with keyboards in the previous generations, but also overheating because of their dreadful airflow. Time will tell what happens to the M1, but Apple's hardware is just as (un)reliable as say, Dell's.
> Apple's hardware is just as (un)reliable as say, Dell's.
When I had access to reports from IT on a previous job (5k+ employees, most on MacBooks) Apple was definitely much more reliable than the Dell Windows machines in use. More reliable than the ThinkPads as well but this is data from one company, unsure how it compares to other large orgs.
Not only more reliable but customer service was much faster and better with Apple computers than Dell's.
This only makes sense if you presume people throw away their laptops when they replace them after "a few years". Given the incredibly high second hand value of macbooks, I think most people sell them or hand them down.
You're talking about selling working devices but parent was also talking about repairing them.
Seems like a huge waste to throw away a $2000+ machine out of warranty because some $5 part on it dies. Apple not only doesn't provide a spare but actively fights anyone trying to repair them, while the options they'll realistically give you out of warranty are having your motherboard replaced for some insane sum like $1299, or buying a new laptop.
Or what if you're a klutz and spill your grape juice glass over your keyboard? Congrats, now you're -$2000 lighter since there's no way to take it apart and clean the sticky mess inside.
> Or what if you're a klutz and spill your grape juice glass over your keyboard? Congrats, now you're -$2000 lighter since there's no way to take it apart and clean the sticky mess inside.
Thanks to the Right To Repair, you can take the laptop to pretty much any repair shop and they can replace anything you damaged with OEM or third-party parts. They even have schematics, so they can just desolder and resolder failed chips. In the past, this sort of thing would be a logic board swap for $1000 at the very least, but now it's just $30 + labor.
Oh, there is no right to repair. So I guess give Apple $2000 again and don't drink liquids at work.
Removable ram wouldn't change anything in your story presuming the entire board is fried. Anyway, $30 + labor is a deceitful way to put it. The labor in your story would be 100s an hour and would probably fail to actually fix the issue most of the time.
Perhaps this is the real reason behind the "crack design team" jokes? A wholesale internal switchover from liquid-based stimulants after one too many accidents?
What makes you say that? What did you expect would happen if you spill juice into your laptop?
What they are perhaps fighting is unauthorized repairs, in the sense that they want to be able to void the warranty if some random third party messes with the insides. That's not quite the same thing.
Apple has been very helpful when I brought in a 5 year old macbook pro with keyboard issues, replaced some keys for free on the spot. Also when the batteries of 8 and 9 year old MBAs started to go bad, they said they could replace them but advised me to order batteries from iFixit and do it myself, which I did.
Seems like a huge waste to throw away a $2000+ machine
There are other options besides throwing it away.
You can (a) trade it in for a new Mac (I just received $430 for my 2014 MBP) or (b) sell it for parts on eBay.
Or what if you're a klutz and spill your grape juice glass over your keyboard? Congrats, now you're -$2000 lighter since there's no way to take it apart and clean the sticky mess inside.
You can unscrew a Mac and clean it out. You can also take it into Apple for repair.
"Yeah <normal guy>'s out sick today, I'm his replacement."
*yeet*
In all seriousness, I would absolutely love to do this sort of thing IRL, in situations where the worst outcome is that incompetent management is unimpressed (because I'm exposing their inefficiency) and there wouldn't be any real or significant ramifications (e.g. machines that processed material a couple of notches more interesting than what PCI-DSS covers).
But obviously I don't mean I'd literally use the above example to achieve this ;P
I've just learned a bit about (eh, you could say "been bitten by") poorly coordinated e-waste management/refurbishment/etc programs - these can be a horrendously inefficient money-grab if the top-level coordination isn't driven by empathy in the right places. So I would definitely get a kick out of doing something like that properly.
We used to remove the hard drives which takes about 20 seconds on a desktop that has a "tool-less" case. Then donate the computer to anybody including the employees if they want it.
It takes a few minutes to do that on a laptop but it's not that long.
I suspect the disposal company your company contracts with parts them out and resells them. Although if you're literally throwing them in the dumpster, that's not even legal in many jurisdictions.
All of my Apple laptops (maybe even all their products) see about 5 to 8 years of service. Sometimes with me, sometimes as hand-me-downs. So they’ve been pretty excellent at not winding up in the trash.
Even software updates often stretch as far back as 5 year old models, so they’re pretty good with this.
Big Sur is officially supported by 2013 Mac models (8 years).
iOS 15 is supported by the 6s, which was 2015. So 6 years.
And I still know people using devices from these eras. Apple may not be repair friendly, but at the end of the day, their devices are the least likely to end up in the trash.
And here I am sitting at my 2011 Dell Latitude wondering what is so special about that. My sis had my 2013 Sony Duo but that's now become unusable with its broken, built-in battery. Yes, 5 to 8 years of service is nice, but not great or out of norm for a $1000+ laptop.
Parent is talking about laptops, I am talking about laptops, why are you talking about smartphones? Though I also had my Samsung S2plus from 2013 to 2019 in use, and that was fairly cheap. I do not know any IPhone users that had theirs for longer.
> it's the packaging that is massively unfriendly to the environment, stupid.
Of all the garbage my family produces over the course of time, my Apple products probably take less than 0.1% of my family's share of the landfill. Do you find this to be different for you? Or am I speaking past the point you're trying to make here?
Is there an estimate of what the externality cost is for the packaging per unit? Would be useful to compare to other things that harm the environment like eating meat, taking car rides, to know how much I should think about this. E.g. if my iphone packaging is equivalent to one car ride I probably won't concern myself that much, but if it's equivalent to 1000 then yeah maybe I should. Right now I really couldn't tell you which of those two the true number is closer to. I don't expect we would be able to know a precise value but just knowing which order of magnitude it is estimated to be would help.
It absolutely doesn't render the carbon-neutrality of the chip useless. Concern about waste and concern about climate change are bound by a political movement and not a whole lot else. It's not wrong to care about waste more, but honestly it's emissions that I care about more.
> It's not wrong to care about waste more, but honestly it's emissions that I care about more.
Waste creates more emissions. Instead of producing something once, you produce it twice. That's why waste is bad, it's not just about disposing of the wasted product.
Not if the company that produces them is carbon neutral, which is theoretically the argument here. In general you're obviously correct, but I'd expect most emissions aren't incurred from waste.
Won't that make the chip bigger and/or slower? I think it's the compactness, with the main components so close together and finely tuned, that makes the difference. Making it composable probably also means making it bigger (hence it won't fit in as small a space) and probably slower than it is. Just my two cents though, I'm not a chip designer.
Just making the SSD (the only part that wears) replaceable would greatly increase the lifespan of these systems, and while supporting M.2 would take up more space in the chassis, it would not meaningfully change performance or power.
Aren’t most of the components in MacBooks recyclable? If I remember correctly, Apple has a recycling program for old Macs, so it’s not like these machines go to landfill when they’re past their time or broken.
I believe Apple tries to use mostly recyclable components. And they do have a fairly comprehensive set of recycling / trade-in programs around the globe: https://www.apple.com/recycling/nationalservices/
That being said, I haven’t read any third-party audits to know if this is more than Apple marketing. Would be curious if they live up to their own marketing.
> Would be curious if they live up to their own marketing.
Do people really think that companies like Apple et al (who have a huge number of people following them eager to rip into them at ever opportunity) could get away with a "marketing story" like that? Like, really, Apple just making all that up and _not one single person_ whistleblowing on it if it were a lie?
> what they wont do is put the chip in an expandable and repairable system
Because that degrades overall performance. The SoC approach has proven to simply be more performant than a fully hot-swappable architecture. Look at the GPU improvements they're mentioning - PCIe 5.0 (not yet released) maxes out at 128GB/s for an x16 link (both directions combined), whereas the SoC Apple announced today moves data between the CPU and GPU at 400GB/s.
In the end, performance will always trump interchangeability for mobile devices.
In this case I'd argue it is, because to communicate between the CPU and GPU on a non-SoC computer, you need to send that data through the PCIe interface. On the M1 SoC, you don't. They operate differently, and that's the main point here. You have to add those extra comparison points.
But it would be nice if the SoC itself were a module that you could upgrade while keeping the case/display; it would probably also cut down on environmental impact...
They would make less money if they made the chip repairable. This doesn't have to make them evil. Apple being more profitable also means they can lower the cost and push the technological advancement envelope forward faster. Every year we will get that much faster chips. This is good for everyone.
This doesn't mean Apple's carbon footprint has to suffer. If Apple does a better job recycling old Macbooks than your average repair guy who takes an old CPU and puts in a new one in a repairable laptop then Apple's carbon footprint could be reduced. I remember the days when I would replace every component in my desktop once a year, I barely thought about recycling the old chips or even selling them to someone else. They were simply too low value to an average person to bother with recycling them properly or reselling them.
>But what they won't do is put the chip in an expandable and repairable system so that you don't have to discard and replace it every few years. This renders the carbon-neutrality of the chips meaningless. It's not the chip, it's the packaging that is massively unfriendly to the environment, stupid.
Mac computers last way longer than their PC counterparts.
Is Apple's halo effect affecting your perception of the Mac vs PC market? iPhones last longer because they get software updates for much longer and are more powerful to start with. Neither of those factors applies to Macs vs PCs.
Yeah but some Android stuff and windows stuff is so low-end that it only lasts for like 2 years and then it's functionally obsolete because of software. All the mac stuff from 10 years ago seems to still be able to work and has security updates.
Yeah, but one of them doesn't cost a ton to implement (what they're doing) and the other one would cost them a ton through lost sales (what you're asking for).
I always thought it was strange that "integrated graphics" was, for years, synonymous with "cheap, underperforming" compared to the power of a discrete GPU.
I never could see any fundamental reason why "integrated" should mean "underpowered." Apple is turning things around, and is touting the benefits of high-performance integrated graphics.
Very simple: thermal budget. Chip performance is limited by thermal budget. You just can't spend more than roughly 100W in a single package, without going into very expensive and esoteric cooling mechanisms.
this is mostly wrong. The real issue has always been memory bandwidth. The highest-end consumer x86 CPU has about the same memory bandwidth as a dGPU from 10 years ago. The M1 Max is extremely competitive with modern dGPUs on that front, only a bit behind a 6900 XT.
If Intel/AMD were serious about iGPUs, they would implement a solution for memory bandwidth (they have: Intel Iris Pro with eDRAM, AMD's game consoles using GDDR, AMD's upcoming stacked SRAM). So I believe the core problem is that the market didn't seriously want a great iGPU; it was fine with a poorer iGPU or a beefier dGPU from Nvidia.
Apple compares the M1 Max as having similar performance to Nvidia's 3080 Laptop GPU, which scores around 16,500 on PassMark. For comparison, the AMD 6900 XT desktop GPU scores 27,000, while the Nvidia 3080 desktop GPU scores 24,500.
So the M1 Max is not as fast as a high-end desktop GPU. Still, it is incredible that you are getting a GPU that performs only slightly below a last-generation 2080 desktop GPU at just 50-60 watts.
Yes, Apple's marketing materials claim 400 GB/s, while the 6900 XT is 512 GB/s. This is very easily Googled. While memory bandwidth isn't everything, it is the major bottleneck in most graphics pipelines. An x86 CPU with dual-channel DDR4-3200 memory has about 40 GB/s of bandwidth, which more or less makes high-end integrated graphics impossible.
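For a rough sense of the gap, here's the back-of-envelope math (peak numbers only; the M1 Max bus width and data rate are the commonly reported 512-bit LPDDR5-6400 figures, so treat those as assumptions):

    # peak bandwidth = channels * bus width in bytes * transfer rate
    def peak_gb_s(channels, bus_bits, transfers_per_s):
        return channels * (bus_bits / 8) * transfers_per_s / 1e9

    print(peak_gb_s(2, 64, 3.2e9))   # dual-channel DDR4-3200: ~51 GB/s peak (~40 GB/s in practice)
    print(peak_gb_s(1, 512, 6.4e9))  # 512-bit LPDDR5-6400 (reported M1 Max config): ~410 GB/s
    print(peak_gb_s(1, 256, 16e9))   # 256-bit GDDR6 at 16 GT/s (6900 XT class): ~512 GB/s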
Ah, I misunderstood your comment. When you said it was competitive with the 6900 XT I thought you were talking about GPU performance in general, not just in terms of memory bandwidth.
According to the numbers Apple is touting, the M1 Max is competitive with modern GPUs in general, being on par with—roughly—a 3070 (laptop version) or a 2080 (desktop version). They've still got a ways to go but this is shockingly close, particularly given their power envelope.
> this is mostly wrong. The real issue has always been memory bandwidth.
Not really wrong. Memory bandwidth is only a limitation for a very narrow subset of problems.
I've gone back and forth between server-grade AMD hardware with 4-channel and 8-channel DDR4 and consumer-grade hardware with 2-channel DDR4. For most of my work (compiling, mostly) the extra memory bandwidth didn't make any difference. The consumer parts are actually faster for compilation because they have a higher turbo speed, despite having only a fraction of the memory bandwidth.
Memory bandwidth does limit certain classes of problems, but we mostly run those on GPUs anyway. Remember, the M1 Max memory bandwidth isn't just for the CPU. It's combined bandwidth for the GPU and CPU.
It will be interesting to see how much of that memory can be allocated to a M1 Max. It might be the most accessible way to get a lot of high-bandwidth RAM attached to a GPU for a while.
GP is talking specifically about GPUs. iGPUs are 100% bottlenecked by memory bandwidth; specifically, it is the biggest bottleneck for every single purchasable iGPU on the market (excluding M1 Pro/Max).
Your compute anecdotes have no bearing on (i)GPU bottlenecks.
> I never could see any fundamental reason why "integrated" should mean "underpowered."
There was always one reason: limited memory bandwidth. You simply couldn't cram enough pins and traces for all the processor io plus a memory bus wide enough to feed a powerful GPU. (at least not in a reasonable price)
We solved that almost a decade ago now with HBM. Sure, the latencies aren't amazing, but the power consumption numbers are and large caches can hide the higher access latencies pretty well in almost all cases.
The only time I can remember HBM being used with some kind of integrated graphics was that strange Intel NUC with a Vega GPU, and IIRC they were on the same die.
That product had an Intel CPU and AMD GPU connected via PCIe on the same package, not the same die. It was a neat experiment, but it was really just a packaging trick.
Still confused how 32 core M1 Max competes with Nvidia's thousands-of-cores GPUs. Certainly there are some things that are nearly linear with core count, or otherwise they wouldn't keep adding cores, right?
The desktop RTX 3070 has 46 SMs, which are the most comparable thing to Apple's cores.
NVIDIA defines any SIMD lane to be a core. They've recently gotten more creative with the definition: they were able to double FP32 execution per unit (versus the previous gen) and hence, in marketing materials, doubled the number of "CUDA cores".
The apples to apples comparison would be CUDA cores to execution units. Basically how many units which can perform a math operation. Apple's architecture has 128 units per core, so a 32 core M1 Max has the same theoretical compute power as 4096 CUDA cores. This of course doesn't take into consideration clock speed or architectural differences.
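If you want numbers, a quick sketch (ALU counts from the comments above; the clock speeds are round assumed values, so treat the results as ballpark only):

    # FP32 throughput ~= ALUs * 2 (fused multiply-add) * clock
    def tflops_fp32(alus, ghz):
        return alus * 2 * ghz / 1000

    m1_max_alus  = 32 * 128   # 32 GPU cores * 128 ALUs per core = 4096
    rtx3070_alus = 46 * 128   # 46 SMs * 128 FP32 lanes per Ampere SM = 5888 "CUDA cores"

    print(tflops_fp32(m1_max_alus, 1.3))   # ~10.6 TFLOPS, in line with Apple's ~10.4 figure
    print(tflops_fp32(rtx3070_alus, 1.7))  # ~20 TFLOPS for a desktop 3070 at boost clocks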
Perhaps with Vista? "Integrated" graphics meant something like Intel 915 which couldn't run "Aero". Even if you had the Intel 945, if you had low bandwidth RAM graphics performance still stuttered. Good article:
https://arstechnica.com/gadgets/2008/03/the-vista-capable-de...
You are mistaken. On both the PS3 and Xbox 360, the CPU and GPU are on different chips and made by different vendors (CPU by IBM and GPU by Nvidia in the case of the PS3; CPU by IBM and GPU by ATI for the Xbox 360). Nonetheless, in the PS4/Xbox One generation they both use a single die with unified memory for everything, and their GPUs could be called integrated.
When they did that they had to deliberately hamstring the SOC in order to ensure it didn’t outperform the earlier models. From a consistency of experience perspective I understand why, but it makes me somewhat sad that the system never truly got the performance uplift that would have come from such a move. That said there were significant efficiency gains from that if I recall.
Longer, since integrated graphics used to mean integrated onto the northbridge and its main memory controller. nForce integrated chipsets with GPUs in fact started from the machinations of the original Xbox switching to Intel from AMD at the last second.
What software is missing? I figured the AMD G-series CPUs used the same graphics drivers and same codepaths in those drivers for the same (Vega) architecture.
My impression was that it was still the hardware holding things back: Everything but the latest desktop CPUs still using the older Vega architecture. And even those latest desktop CPUs are essentially PS5 chips that got binned out.
Deep OS support for unified memory architectures for one. Things they tried to do with HSA etc. Also NVidia winning so much gpu programming mindshare with Cuda, and OpenCL failing to take off on mobile, dooming followon opencl development plans, didn't help.
In the wider picture, gpu compute in general on PC also failed to become mainstream enough to sway consumer choices. Development experience for GPUs is still crap vs the cpu, the languages are mostly bad, there's massive sw platform fragmentation among os vendors and gpu vendors, driver bugs causing OS crashes left and right, etc.
Re your impression, yes, AMD shifted focus more toward cpu from gpu in their SoCs after a while when their initiatives failed to take off outside consoles. But it's been an ok place to be, just keeping the gpu somewhat ahead of Intel competition and getting some good successes in the cpu side.
iGPU Vega is actually really, really good esp when it comes to perf/watt. It is bottlenecked by the slow memory bandwidth. DDR5 will more or less double iGPU performance.
Laptop/desktop have 2 channels.
High-end desktop can have 4 channels.
Servers have 8 channels.
How does Apple do that?
I was always assuming that having that many channels is prohibitive in terms of either power consumption and/or chip size.
But I guess I was wrong.
It can't be GDDR because chips with the required density don't exist, right?
You aren't wrong, Apple is able to do this because implementing LPDDR is much more efficient from both a transistor and power consumption point of view, and is actually faster too. The tradeoff is you can't put 8 or 16 dram packages on the same channel like you can with regular DDR, which means that the M1 Max genuinely has a 64 GB limit, while a DDR system with the same bandwidth would be 1 TB. Fortunately for Apple there isn't really a market for a laptop with a TB of RAM.
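The capacity tradeoff is easy to see with rough numbers (package and DIMM sizes below are illustrative, not actual part configurations):

    # LPDDR is point-to-point: one soldered package per channel group
    lpddr_packages, gb_per_package = 4, 16
    print(lpddr_packages * gb_per_package)             # 64 GB ceiling

    # Regular DDR lets you hang multiple DIMMs off each channel
    channels, dimms_per_channel, gb_per_dimm = 8, 2, 64
    print(channels * dimms_per_channel * gb_per_dimm)  # 1024 GB ~= 1 TB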
DDR4 is the common desktop/laptop chip. LPDDR5 is a cell-phone chip, so it's kinda funny to see low-power RAM being used in a very wide configuration like this.
Don't cell phones sell more than desktops, laptops, and servers? Smartphones aren't a toy: they are the highest volume computing device.
They are also innovating with things like on-chip ECC for LPDDR4+, while desktop DDR4 still doesn't have ECC thanks to Intel intentionally gimping it for market segmentation.
Well sure. But that doesn't change the fact that DDR5 doesn't exist in any real numbers.
LPDDR5 is a completely different protocol from DDR5 by the way, just like GDDR5 is completely different from DDR3 it was based on. LPDDR3 was maybe the last time the low-power series was something like DDR3 (the mainline).
Today, LPDDR5 is based on LPDDR4, which diverged significantly from DDR4.
> They are also innovating with things like on-chip ECC for LPDDR4+
DDR5 will have on-chip ECC standard, even unbuffered / unregistered.
That sounds like HBM2, maybe HBM3, but that would be the first consumer product to include it AFAIK.
Basically the bus is really wide, and the memory dies must be really close to the main processing die. That memory was notably used on the RX Vega from AMD, and before that on the R9 Fury.
They did that to compare against the last comparable Intel chips in a Mac, which seems rather useful for people looking to upgrade from that line of Mac.
How is it disingenuous - defined in my dictionary as not candid - when we know precisely which chips they are comparing against?
They are giving Mac laptop users information to try to persuade them to upgrade from their 2017 MacBook Pros and this is probably the most relevant comparison.
Intel's 2021 laptop chips (Alder Lake) are rumoured to be released later this month (usually actual availability is a few months after "release"). I expect them to be pretty compelling compared to the previous generation Intel parts, and maybe even vs AMD's latest. But the new "Intel 7" node (formerly 10++ or something) is almost certainly going to be behind TSMC N5 in power and performance, so Apple will most likely still have the upper hand.
Various leaked benchmarks show it outperforming the comparable Ryzens (and it's pretty obvious these rumours are sanctioned by Intel, given the copious omission of wattage numbers).
The slide where they say it's faster than an 8-core PC laptop CPU is comparing it against the 11th-gen i7-11800H [1]. So it's not as fast as the fastest laptop chip, and it's certainly not as fast as the monster laptops that people put desktop CPUs in. But it uses 40% of the power of a not-awful 11th-gen 8-core i7 laptop. The M1 is nowhere near as fast as a full-blown 16-core desktop CPU.
I am sure we will see reviews against high-end Intel and AMD laptops very soon, and I won't be surprised if real-world performance blows people away, as the M1 Air did.
When M1 first released they pulled some marketing voodoo and you always saw the actively cooled performance numbers listed with the passively cooled TDP :D Nearly every tech article/review was reporting those two numbers together.
1- want to convince people still on Intel Macs to update
2- lengthen the news cycle when the first units are shipped to the tech press and _they_ run these benchmarks
For me, the headline is that memory bandwidth. No other CPU comes even close. A Ryzen 5950X can only transfer about 43GB/s; this thing promises 400GB/s on the highest-end model.
As always, though, the integrated graphics thing is a mixed blessing. 0-copy and shared memory and all of that, but now the GPU cores are fighting for the same memory. If you are really using the many displays that they featured, just servicing and reading the framebuffers must be...notable.
A high end graphics card from nvidia these days has 1000GB/s all to itself, not in competition with the CPUs. If these GPUs are really as high of performance as claimed, there may be situations where one subsystem or the other is starved.
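To put the display part in perspective, here's a rough scan-out estimate (assuming plain 4-bytes-per-pixel framebuffers and ignoring compression, so this is a rough upper bound on refresh traffic alone, not rendering):

    # bandwidth just to refresh a framebuffer = width * height * bytes/pixel * refresh rate
    def scanout_gb_s(w, h, hz, bytes_per_px=4):
        return w * h * bytes_per_px * hz / 1e9

    print(scanout_gb_s(3840, 2160, 60))      # one 4K@60 display: ~2.0 GB/s
    print(3 * scanout_gb_s(6016, 3384, 60))  # three 6K displays: ~14.7 GB/s

So noticeable against 400 GB/s, but not crippling by itself.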
No consumer CPU comes close. I just saw an article about the next-gen Xeons with HBM, though, which blow even this away (1.8TB/s theoretically), but what else would one expect from enterprise systems. Getting pretty damn excited about all the CPU manufacturers finally getting their asses into gear innovation-wise after what feels like a ridiculously long period of piss-warm "innovation".
The benchmark to power consumption comparisons were very interesting. It seemed very un-Apple to be making such direct comparisons to competitors, especially when the Razer Blade Advanced had slightly better performance with far higher power consumption. I feel like typically Apple just says "Fastest we've ever made, it's so thin, so many nits, you'll love it" and leaves it at that.
I'll be very curious to see those comparisons picked apart when people get their hands on these, and I think it's time for me to give Macbooks another chance after switching exclusively to linux for the past couple years.
I think that for the first time, Apple has a real performance differentiator in its laptops. They want to highlight that.
If Apple is buying Intel CPUs, there's no reason making direct performance comparisons to competitors. They're all building out of the same parts bin. They would want to talk about the form factor and the display - areas where they could often out-do competitors. Now there's actually something to talk about with the CPU/GPU/hardware-performance.
I think Apple is also making the comparison to push something else: performance + lifestyle. For me, the implication is that I can buy an Intel laptop that's nicely portable, but a lot slower; I could also buy an Intel laptop that's just as fast, but requires two power adapters to satisfy its massive power drain and really doesn't work as a laptop at all. Or I can buy a MacBook Pro which has the power of the heavy, non-portable Intel laptops while sipping less power than the nicely portable ones. I don't have to make a trade-off between performance and portability.
I think people picked apart the comparisons on the M1 and were pretty satisfied. 6-8 M1 performance cores will offer a nice performance boost over 4 M1 performance cores and we basically know how those cores benchmark already.
I'd also note that there are efforts to get Linux on Apple Silicon.
Apple used to do these performance comparisons a lot when they were on the PowerPC architecture. Essentially they tried to show that PowerPC-based Macs were faster (or as fast as) Intel-based PCs for the stuff that users wanted to do, like web browsing, Photoshop, movie editing, etc.
This kind of fell by the wayside after switching to Intel, for obvious reasons: the chips weren’t differentiators anymore.
Apple almost single-handedly made computing devices non-repairable or upgradable; across their own product line and the industry in general due to their outsized influence.
Just today I got one 6s and one iPhone 7 screen repaired (the 6s got the glass replaced, the 7 got the full assembly replaced) and the battery of the 6s replaced at a shop that is not authorized by Apple. It cost me $110 in total.
Previously I got 2017 Macbook Air SSD upgraded using an SSD and an adapter that I ordered from Amazon.
What is this narrative that Apple devices are not upgradable or repairable?
It's simply not true. If anything, Apple devices are the easiest to get serviced, since there are not many models and pretty much all repair shops can deal with every device that is still usable. Because of this, even broken Apple devices are bought and sold all the time.
>Just today I got one 6s and one iPhone 7 screen repaired
Nice, except doing a screen replacement on a modern iPhone like the 13 series will disable your FaceID making your iPhone pretty much worthless.
>Previously I got 2017 Macbook Air SSD upgraded using an SSD and an adapter that I ordered from Amazon
Nice, but on the modern Macbooks, the SSD is soldered and not replaceable. There is no way to upgrade them or replace them if they break, so you just have to throw away the whole laptop.
So yeah, the parent was right: Apple devices are the worst for repairability, period. The ones you're talking about are not manufactured anymore and therefore don't represent the current state of affairs, and the ones manufactured today are built not to be repaired.
Hardware people are crafty; they find ways to transfer and combine working parts. The glass replacement (keeping the original LCD) I got for the 6s is not a procedure provided by Apple. Guess who doesn’t care? The repair shop that bought a machine from China for separating and reassembling the glass and LCD.
Screen replacement is $50, glass replacement is $30.
The iPhone 13 is very new; give it a few years and the hardware people will leverage the desire not to spend $1,000 on a new phone when the current one works fine except for that one broken part.
Only if Apple wants to let them as far as I have seen. The software won't even let you swap screens between iPhone 13s. Maybe people will find a work around, but it seems like Apple is trying its hardest to prevent it.
And yet they authorize shops to perform these repairs. They’re not trying to prevent repairs, they’re trying to ensure repairs use Apple-supplied parts. Which, sure, you may object to that… but it’s very different from saying they’re preventing repairs full stop. And there’s very little chance such an effort would do anything other than destroy good will.
By changing chips. There are already procedures for fun stuff like upgrading the RAM on the non-Retina MacBook Airs to 16GB. Apple never offered a 16GB version of that laptop, but you can have it[0].
You clearly don't have a clue how modern Apple HW is built, or why the stuff you're talking about on old Apple HW just won't work anymore on the machines built today.
I'm talking about 2020 devices where you can't just "change the chips" and hope it works like in the 2015 model from the video you posted.
> I would love to be enlightened about the new physics that Apple is using which is out of reach to the other engineers.
That’s known as private-public key crypto with keys burnt into efuses on-die on the SoC.
You can’t get around that (except for that one dude in Shenzhen who just drills into the SoC and solders wires by hand which happen to hit the right spots). But generally, no regular third party repair shop will find a way around this.
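For what it's worth, the general idea is just a challenge-response over a burnt-in key. Here's a generic sketch (this is NOT Apple's actual scheme, just an illustration using the third-party 'cryptography' Python package): the host only trusts parts that can sign a fresh challenge with a private key matching a public key it already knows.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    # At the factory: a key pair is generated and the private half is fused into the part.
    part_private = ec.generate_private_key(ec.SECP256R1())
    trusted_public = part_private.public_key()   # the host knows/trusts this half

    def host_accepts(sign_fn):
        challenge = os.urandom(32)               # fresh nonce, so old signatures can't be replayed
        signature = sign_fn(challenge)
        try:
            trusted_public.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
            return True
        except InvalidSignature:
            return False

    genuine = lambda c: part_private.sign(c, ec.ECDSA(hashes.SHA256()))
    clone = lambda c: ec.generate_private_key(ec.SECP256R1()).sign(c, ec.ECDSA(hashes.SHA256()))

    print(host_accepts(genuine))  # True
    print(host_accepts(clone))    # False: right protocol, wrong key

A third-party part can implement the same protocol perfectly and still fail, because it doesn't hold the fused private key.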
I know about it; it simply means that someone will build a device that automates what the dude in Shenzhen does, or they will mix and match devices with different kinds of damage. E.g. a phone with a destroyed (irreparable) screen will donate its parts to phones with a broken Face ID lens.
You know, these cryptographic authentications work between ICs, not between lenses and motors. Keep the coded IC, change the coil. Things also have different failure modes: for example, a screen might break due to glass failure (which cannot be coded), and the repair shop can replace the broken part of the assembly while keeping the IC that handles communication with the mainboard. Too complicated for a street shop? Someone will build a B2B service: shops will ship it to them, they will ship it back, leaving only the installation to the street shop.
The possibilities are endless. Some easier, some harder, but we are talking about talent that makes all kinds of replicas of all kinds of devices. With billions of iPhones out there, it's actually a very lucrative market to be able to salvage a $1,000 device; their margins could be even better than Apple's when they charge $100 to change the glass of the LCD assembly.
I know Louis; he made a career of complaining that it's impossible to repair Apple devices while repairing Apple devices.
Instead of watching videos and getting angry about Apple devices being impossible to repair, I get my Apple devices repaired when something breaks. Significantly more productive approach, you should try it.
Louis has been making "Apple is impossible to repair" videos forever. It's not an iPhone 13 thing; give it a few years and you can claim the iPhone 17 is impossible to repair, unlike the prehistoric iPhone 13.
He recently moved to a new, larger shop in an attempt to grow his Apple repair operation, then had to move back to a smaller shop because, as it turns out, it wasn't Apple ruining his repair business.
You don't. It's technological progress, similar to how we lost the ability to repair individual transistors with the introduction of integrated circuits. If this doesn't work for you, you should stick with the old tech; I think the Russians did something like that with their Soviet-era avionics. There are also audiophiles who never even switched to transistors and still use vacuum tubes, and the Amish, who stick to horses and candles, choosing to preserve their way of doing things and avoid the problems of electricity and powered machinery.
You will need to make a choice sometimes. Often you can't have small, efficient, and repairable all at once.
> Nice, but on the modern Macbooks, the SSD is soldered and not replaceable. There is no way to upgrade them or replace them if they break, so you just have to throw away the whole laptop.
I mean, you can replace the logic board. Wasteful, sure, but there's no need to throw out the whole thing.
In modern Apple laptops (2018 and later), the storage is soldered, as the memory has been since 2015. Contrast this with a Dell XPS 15 you can buy today, in which you can upgrade/replace both the memory and the storage. This is the case with most Windows laptops. The exception is usually the super-thin ones that solder in RAM Apple-style, but there are some others that do as well.
There's also the fact that Apple does things like integrate the display connector into the panel part. So, if it fails - like when Apple made it too short with the 2016 and 2017 Macbook Pros causing the flexgate controversy - it requires replacing a $600 part instead of a $6 one.
True, but you are talking about devices that are 4-6 years old. Storage is now soldered. RAM has been soldered for a while now, and with Apple Silicon it's part of the SoC package.
Not that I've heard of anyone's DIMM popping out over time, but I'd rather be able to pop one back in than have to ship the machine to a repair shop with a BGA workstation if a DRAM chip develops a fault over time.
Newer MacBooks have both the SSD and RAM soldered to the board; they're no longer user-upgradable unless you have a BGA rework station and know how to operate it.
>According to iFixit, the Surface Laptop isn’t repairable at all. In fact, it got a 0 out of 10 for repairability and was labeled a “glue-filled monstrosity.”
The lowest scores previously were a 1 out of 10 for all previous iterations of the Surface Pro
I'm still daily-driving a 2015 MBP. Got the battery replaced, free under warranty, a few years ago. Running the latest macOS without any issues.
The phones in my family are an iPhone 6S, iPhone 8 and an iPhone XS. All running the latest iOS. The 6S got a battery swap for 50€, others still going strong.
Similar with tablets, we have three and the latest one is a 2017 iPad Pro. All running the latest iPadOS.
Stuff doesn't need to be repairable and upgradable if it can outlast the competition by a factor of two while still staying on the latest official OS update.
Can't do that with any Android device. A 6 year old PC laptop might still be relevant though.
Apparently, you didn't compare Apple devices with what the bulk of the market consists of.
Also, implying that repairability is required for environmental sustainability is questionable at best. People in their vast majority tend to get rid of 5 years old phones and laptops.
FWIW, they are in general quite accurate with their ballpark performance figures. I expect the actual power/performance curves to be similar to what they showed. Which is interesting, because IIRC the plots from Nuvia before they were bought showed their cores had a similar profile. It would be exciting if Qualcomm could have something good for a change.
> I'll be very curious to see those comparisons picked apart when people get their hands on these, and I think it's time for me to give Macbooks another chance after switching exclusively to linux for the past couple years.
I really enjoy linux as a development environment, but this is going to be VERY difficult to compete with..
I skip getting a Starbuck's latte, and avoid adding extra guac at Chipotle.
I'm kidding, that stuff has no effect on anything.
Justifiable, as in "does this make practical sense", is not the word, because it doesn't. Justifiable, as in, "does it fit within my budget?" yes that's accurate. I don't have a short answer to why my personal budget is that flexible, but I do remember there was a point in my life where I would ask the same thing as you about other people. The reality is that you either have it or you don't. That being said, nothing I had been doing for money is really going to max this kind of machine out or improve my craft. But things that used to be computationally expensive won't be anymore. Large catalogues of 24 megapixel RAWs used to be computationally expensive. Now I won't even notice, even with larger files and larger videos, and can expand what I do there along with video processing, which is all just entertainment. But I can also do that while running a bunch of docker containers and VMs... within VMs, and not think about it.
This machine, for me, is the catalyst for greater consumptive spending though. I've held off on new cameras, new NASs, new local area networking, because my current laptop and devices would chug under larger files.
Hope there was something to glean from that context. But all I can really offer is "make, or simply have, more money", not really profound.
There's also future-proofing to some degree. I'll probably get a somewhat more loaded laptop than I "need" (though nowhere near $6K) because I'll end up kicking myself if 4 years from now I'm running up against some limit I under-specced.
Yeah I forgot to mention that, its a given for me.
Like there’s the potential tax deductibility, along with it being a store of value (it will probably be worth $2300 in a few years, but that's okay), making it easier to rationalize future laptops by trading this one in. But I’m not betting on any of that.
I’ve just been waiting for this specific feature set, I’m upgrading from a maxed out dual GPU 2015 MBP that I purchased in 2017.
I skipped the whole divergence and folly.
No butterfly keyboards, no tolerating USB-C-only ports while the rest of the world caught up, no USB-C charging, no Touch Bar; I held out. And now I get Apple Silicon, which already had rave reviews and blew everything else in the laptop space out of the water, and now I get the version with the RAM I want.
Surprisingly little fanfare, on my end. Which is kind of funny, because I fondly remember configuring expensive maxed-out Apple computers on their website that I could never afford. It's definitely more monumental if you save money for one specific thing and achieve it. But this time I just knew I was going to do it if Apple released a specific kind of M1 upgrade in a specific chassis, which they did and more. So it fit within my available credit, which I'll likely pay off by the end of the week, and I'm also satisfied that I get the points and a spending promotion my credit card had told me about.
A few thousand dollars per year (presumably it will last more than one year) is really not much for the most important piece of equipment a knowledge worker will be using.
I mean, the Audi R8 has an MSRP > $140k and I've never been able to figure out how that is justifiable. So I guess dropping $6k on a laptop could be "justified" by not spending an extra $100k on a traveling machine?
To be clear, I'm not getting one of these, but there's clearly people that will drop extra thousands into a "performance machine" just because they like performance machines and they can do it. It doesn't really need to be justified.
Truthfully, I'm struggling to imagine the scenario where a "performance laptop" is justifiable to produce, in the sense you mean it. Surely, in most cases, a clunky desktop is sufficient and reasonably shipped when traveling, and can provide the required performance in 99% of actual high-performance-needed scenarios.
If I had money to burn, though, I'd definitely be buying a luxury performance laptop before I'd be buying an update to my jalopy. I use my car as little as I possibly can. I use my computer almost all the time.
And yet, when I commented on Apple submissions that a 16GB RAM maximum was not enough in 2021, especially at that price point, people replied that I was bragging and that their M1 Air with 8GB of RAM was more than enough to do everything, including running a production Kubernetes cluster serving thousands of customers.
When commenting on Mac hardware it is always difficult for me to separate wishful thinking, cultism and actual facts.
That's fundamental to how NAND flash memory works: a larger drive has more NAND dies, and the controller can read and write them in parallel, so bandwidth scales with capacity until the controller or host interface becomes the limit. For high-end PCIe Gen4 SSD product lines, the 1TB models are usually not quite as fast as the 2TB models, and 512GB models can barely use the extra bandwidth over PCIe Gen3. But 2TB is usually enough to saturate the SSD controller or host interface when using PCIe Gen4 and TLC NAND.
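A toy model of why (all the per-die numbers here are made-up round figures just to show the shape of the scaling, not real NAND specs):

    # Bigger drive -> more NAND dies -> more dies the controller can work in parallel,
    # until the controller/host interface becomes the limit.
    per_die_mb_s = 800      # assumed per-die throughput
    die_gb = 128            # assumed die capacity
    host_limit_mb_s = 7000  # roughly a PCIe 4.0 x4 link after overhead

    for drive_gb in (512, 1024, 2048):
        dies = drive_gb // die_gb
        print(drive_gb, "GB:", min(dies * per_die_mb_s, host_limit_mb_s), "MB/s")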
Not OP but ordered a maxxed out 16" with 1TB SSD (can't justify 2k more for disk space, I'll just buy an external and curb my torrenting).
My workflow is intensive yet critical:
I have at all times the following open:
ELECTRON APPS: Slack, Telegram, Teams, Discord, Git Kraken, VSCode (multiple workspaces hosting different repos all running webpack webservers with hot module reloading), Trading View.
NATIVE APPS: Firefox (10 - 32 tabs, many with live web socket connections such as stock trading sites, various web email providers, and at least one background YouTube video or twitch stream), Chrome (~6 tabs with alternate accounts using similar web socketed connections), iTerm, Torrent client (with multiple active transfers).
All of this is being displayed on two external 4k screens + the laptop.
So ya, I can justify maxxed out specs as my demands are far higher than that of an average user and that's with me actively closing things I don't need. Also my work will happily pay for it, so why not?
Perspective. It was a noise word really. Imagine instead a contractor working $100 an hour and pulling enough hours to make $200k a year. Does that change the discussion any? I don't believe so.
Based on the numbers it looks like the M1 Max is in the RTX 3070-3080 performance territory. Sounds like mobile AAA gaming has potential to reach new heights :D
> Based on the numbers it looks like the M1 Max is in the RTX 3070-3080 performance territory.
The slides are comparing to a laptop with a 3080 Mobile, which is not the same as a normal RTX 3080. A desktop 3080 is a power hungry beast and will not work in a laptop form factor.
It's not a function of capability. I spent $4,000 on a 2019 MBP, including $750 for a Vega 20. It plays Elder Scrolls Online WORSE than a friend's 2020 with INTEGRATED graphics. (I guess Bethesda gave some love to the integrated chipset, and didn't optimize for the Vega. It hitches badly every couple of seconds like it's texture thrashing.)
Whatever AAA games might have gotten some love on the Mac (and there are some), it's going to be even harder to get game companies to commit to proper support for the M1 models. Bethesda has said they won't even compile ESO for M1. So I will continue to run it on a 12-year-old computer with an Athlon 64 and an Nvidia 9xx-series video card. (This works surprisingly well, making the fact that my Mac effectively can't play it all the more galling.)
I'm never going to try tricking out a Mac for gaming again. I should have learned my lesson with eGPU's, but no. I thought, if I want a proper GPU, it's going to be built in. Well, that doesn't work either. I've wasted a lot of money in this arena.
Well, Apple is selling M1 Macs like hotcakes, so it won't be too long until it'll be stupid not to support them. Also, texture thrashing isn't really an issue when you've got a shared CPU/GPU memory with 64 GB of space. Just cache like half the game in it lol
EVE Online now supports M1. But regardless, now that MacBooks are capable gaming machines (definitely not the case in the past), and a core demographic of Mac users overlaps with the gaming demographic (20-40 yo), I really think it’s just a matter of time now.
I would argue that Macs with PC-shared Intel CPU's and AMD GPU's should have been much EASIER to support than the new, completely-different architecture, and that hasn't really happened.
Sure, it's candy for tech people, but the average person is going to scoff at a $2000 laptop. They can buy a functional laptop and a better gaming PC for that price. It's not going to change the gaming market.
Apple had a shot at making Mac gaming a reality around 2019. They decided to axe 32-bit library support though, which instantly disqualifies the lion's share of PC games. You could still theoretically update these titles to run on MacOS, but I have little hope that any of these developers would care enough to completely rewrite their codebase to be compatible with 15% of the desktop market share.
Yeah, and they also deprecated OpenGL, which would have wiped out most of those games even if the 32-bit support didn't. I'm not expecting to see much backwards compatibility, I'm expecting forwards compatibility, and we're starting to see new titles come out with native Apple Silicon support, slowly, but surely.
I wouldn't hold your breath. Metal is still a second-class citizen in the GPU world, and there's not much Apple can do to change that. Having powerful hardware alone isn't enough to justify porting software, otherwise I'd be playing Star Citizen on a decommissioned POWER9 mainframe right now.
The major factor Apple has working in their favor with regards to the future of gaming on Macs is iOS. Any game or game engine that wants to support iPhone or iPad devices is going to be most of the way to supporting ARM Macs for "free".
My older Intel Macs I'm sure are more or less SOL but they were never intended to be gaming machines.
That won't get the top 10 Steam games running on MacOS. There's just too great a disparity in the tooling; a 'convergence' like you're describing would take the better part of a decade, conservatively speaking. And even if they did converge, that only guarantees you a portion of the mobile market, and just the new games at that. Triple-A titles will still be targeting x86 first for at least the next 5 years, and everything after that is still a toss-up. There's just too much uncertainty in the Mac ecosystem for most game developers to care, which is why it's a shame that Proton doesn't run on Macs anymore. Apple's greatest shot at a gaming Mac was when MacOS had 32-bit support.
Your older Intel Macs are probably just fine for gaming, too. I play lots of games on my 2016 Thinkpad's integrated graphics, Minecraft, Noita, Bloons Tower Defense 6, all of these titles work perfectly fine, even running in translation with Proton. If you've got a machine with decent Linux support, it's worth a try.
Unreal, Unity, Lumberyard, and Source 2 all support iOS and thus Metal on ARM already. A game developer using one of those engines should generally be able to just click a few buttons to target an additional platform unless they've gone around the engine's framework in ways that tie their title to their existing platform(s). Obviously in all but the most trivial cases there will still be work to be done, but those game developers using a major commercial engine are doing so because someone else has already done most of the hardest work in platform support for them.
That means six of the top 10 could add native MacOS support with relative ease (as in significantly less work than doing it from scratch) if they wanted to. The three Source titles are likely stuck on DX/OGL platforms forever because it doesn't really make sense to rework such an old engine, but at least the two Valve in-house titles have had persistent rumors of a Source 2 update for years.
I mean…the “tooling” these days is usually just Unity3D. And Unity supports Apple silicon as a compile target. Tell me if I’m wrong, but it seems like the ability to support multiple platforms and architectures in gaming has never been easier.
> iPhone games are an entire different beast however, and likely not what people “want”.
Most of those mobile games you're thinking of are made with Unity, Unreal, or one of a few other general purpose game engines. Those same engines are used for a significant chunk of PC games as well. The AAA developers who have in-house engines like to reuse them as well. It doesn't matter if a given game does or does not support mobile if it uses an engine that does.
IIRC, there are some efforts (e.g. MoltenVK) to translate Vulkan to Metal, similar to how the WINE project translates DirectX into OpenGL/Vulkan, but that's still an imperfect workaround.
After committing to Metal and then killing 32-bit support and then switching to ARM, Apple has made it clear that video games are dead on MacOS. I don't know what people are going to do with these new GPUs but it's not going to be gaming.
And in the case of the M1 Pro, Apple is showing it to be faster than the Lenovo 82JW0012US, which has an RTX 3050 Ti. So the performance could be between an RTX 3050 Ti and an RTX 3060. All of this with insanely low power draw.
But still not fanless, right? Maybe they'll update the Macbook Air with some better graphics as well, so that one could do some decent gaming without a fan.
The Air is already at its thermal limit with the 8-core GPU; we'll probably have to wait for the next real iteration of the chip technology (M2 or whatever), which increases efficiency instead of just being larger.
If Proton is made to work on Mac with Metal, there's some real future here for proper gaming on Mac. Either that or Parallels successfully running x64 games via Windows on ARM virtualization.
I’ve been looking into trying CrossOver Mac on my M1 Max for exactly this reason. After seeing the kinds of GPUs Apple is comparing themselves to, I’m very hopeful.
Based on the relative differences between the M1 and the M1 Pro/Max, and also on the comparisons Apple showed against other laptops from MSI and Razer, both featuring the RTX 3080.
This. I currently run triple 24-inch monitors in portrait off my work 2019 16" laptop rather smoothly. Unfortunately the M1 couldn't drive that. The new 14" and 16" with the M1 Max can do triple 4K.
Up to two external displays with up to 6K resolution at 60Hz at over a billion colors (M1 Pro) or
Up to three external displays with up to 6K resolution and one external display with up to 4K resolution at 60Hz at over a billion colors (M1 Max)
When they make those statements I'm always curious if they are expecting the user to run two external displays with the lid closed, or open (which would be 3 displays).
I've always found MacBooks don't play well when the lid is closed, but maybe that has changed?
Same. I am impressed with M1 Pro and M1 Max performance numbers. I ordered the new MBP to replace my 2020 M1 MBP, but I bought it with the base M1 Pro and I'm personally way more excited about 32gb, 1000 nits brightness, function row keys, etc.
Any indication on the gaming performance of these vs. a typical nvidia or AMD card? My husband is thinking of purchasing a mac but I've cautioned him that he won't be able to use his e-gpu like usual until someone hacks support to work for it again, and even then he'd be stuck with pretty old gen AMD cards at best.
>Testing conducted by Apple in September 2021 using preproduction 16-inch MacBook Pro systems with Apple M1 Max, 10-core CPU, 32-core GPU, 64GB of RAM, and 8TB SSD, as well as production Intel Core i9-based PC systems with NVIDIA Quadro RTX 6000 graphics with 24GB GDDR6, production Intel Core i9-based PC systems with NVIDIA GeForce RTX 3080 graphics with 16GB GDDR6, and the latest version of Windows 10 available at the time of testing.
In terms of hardware the M1 Max is great. On paper you won't find anything that matches its performance under load, as even gaming / content-creation laptops throttle after a short while.
The problem is that gaming isn't exactly a Mac thing, from game selection to general support on the platform. So really, performance should be the least of your concerns if you are buying a Mac for games.
I'm not aware of a single even semi-major game (including popular indie titles) that run native on M1 yet. Everything is running under Rosetta and game companies so far seem completely uninterested in native support.
Neither of which is exactly a recent AAA game; WoW is from 2004, right? The other ones I can think of are the old GTA titles, which run "natively" since they have an iOS port.
While at the same time you have (running on Rosetta):
* Deus Ex: Mankind Divided
* Dying Light
* Rise of the Tomb Raider/Shadow of the Tomb Raider
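If you want to check whether a given title is actually being translated, a process can ask macOS directly via the documented sysctl.proc_translated key. Here's a minimal Swift sketch (the helper name is just illustrative):

    import Foundation

    // Returns true if the current process runs under Rosetta 2 translation,
    // false if native, nil if the answer can't be determined.
    func isRunningUnderRosetta() -> Bool? {
        var translated: Int32 = 0
        var size = MemoryLayout<Int32>.size
        if sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == -1 {
            // ENOENT means the key doesn't exist, i.e. a native process on older systems.
            return errno == ENOENT ? false : nil
        }
        return translated == 1
    }

    print(isRunningUnderRosetta().map { $0 ? "translated" : "native" } ?? "unknown")

(Activity Monitor's "Kind" column shows the same information without any code.)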
>though I also don't know if that is working yet on the M1 platform
Dual booting isn't working and likely won't be any time soon, as Microsoft does not intend to support Apple M1 [1]. And I doubt Apple has any intention of porting their Metal GPU drivers to Windows (compared to using AMD drivers on Windows in the old days).
He will likely need to use some sort of VM solution like Parallels. [2]
Wow, I had no idea M1 doesn't support eGPUs. I was planning on buying an external enclosure for Apple Silicon when I upgraded; thanks for pointing that out.
Not only that, but you're stuck with cards from 5 years ago with current MacBooks. It's a shame too, because the plug-and-play eGPU support there is better than the very best eGPU plug-and-play on Linux or Windows.
I don't see Apple personally adding support any time soon, either. Clearly their play now is to make all hardware in house. The last thing they want is people connecting 3090s so they can have an M1 Max gaming rig. They only serve creators, and this has always been true. Damn waste if you ask me.
You could use the new AMD cards at least though right? I don't think Nvidia support will ever happen again though (I got burned by that, bought a 1060 right before they dropped Nvidia).
I'm on a RX 5700XT with my Hackintosh, and it works well.
Edit: Thinking about this more.. I bet third party GPUs are a dead end for users and Apple is planning to phase them out.
I think I'd wait to see what the Mac Pro supports before coming to that conclusion. Could be something they're still working on for that product, and then when it's working on the Apple Silicon build of macOS will be made available on laptops as well.
I wouldn't get one of those for games, better get a Windows PC and a M1 MacBook Air, cost should be about the same for both. Game support just won't be there if you care about gaming.
The video moved a bit too fast for me to catch the exact laptops they were comparing. They did state that the M1 Pro is 2.5x the Radeon Pro 5600M, and the M1 Max is 4x the same GPU.
The performance to power charts were comparing against roughly RTX 3070 level laptop cards.
Well, no - they immediately followed the discrete laptop graphic comparison with a desktop graphic comparison, highlighting how much more power they draw for "the same" performance.
> Well, no - they immediately followed the discrete laptop graphic comparison with a desktop graphic comparison
pretty sure the comparison was with "the most powerful PC laptop we could find", which makes sense because they then talked about how much it was throttled when running only on battery while the new M1 is not.
That wasn't for desktop graphics, it was for a top-end laptop graphics SKU (I think RTX 3080 Mobile on a ~135W-150W TDP configuration?). Otherwise the graph would extend all the way to 360W for a RTX 3090.
And I think based on these numbers that a desktop 3090 would have well over double the performance of the M1 Max. It's apples to oranges, but let's not go crazy just yet.
Now, I am extremely excited to see what they will come up with for the Mac Pro with a desktop thermal budget. That might just blow everything else by any manufacturer completely out of the water.
The chart only shows a line up to about 105W, so it's not clear what they're trying to represent there. (Not that there's any question this seems to be way more efficient!)
GPU workloads are very parallel. By throwing more transistors at the problem while lowering clock rates you can get pretty good performance even in a constrained power budget.
Equivalent to "the notebook with the fastest GPU we could find at half the power" is how I remember it...
I'm just not entirely certain what GPU performance does for me...? I don't work with video, there aren't any games, and I'm not playing them, anyway. Does iTerm2 scrolling get better?
I used to be quite happy with the GeForce 7xx(?) and tensorflow, and this seems like it would have quite a bit of power for ML. Unfortunately, the software just isn't there (yet?).
Well I'm not necessarily a fan of the naming but assuming Max stands for maximum, it's pretty clearly the best one. The one you get if you want to max it out. But they should've called it Pro Max for consistency with the iPhones...
...or just released the full documentation for them. Apple being Apple and wanting full control over its users, I don't see that happening. I really don't care how fast or efficient these are, if they're not publicly documented all I think is "oh no, more proprietary crap". Apple might even make more $$$ if it wanted to open up, but it doesn't.
I mean the documentation of the SoC itself. The thousands (or perhaps tens of thousands) of pages of information about every register and peripheral on it.
Same wish here. Last I tried a few months ago, I was unable to compile UE4 as well. These machines would be great for UE4 dev if only the compatibility was there. I wonder if the politics between Epic and Apple has made efforts in this area a lower priority.
They are not against games, they just don't care about supporting anything that's not coming through their frameworks and the App Store. This can easily be verified by the way-too-long segments of game developer demos at the annual WWDC.
That's not how the industry works; that's not how any of this works. The iPhone ecosystem is big enough to move itself forward, but the desktop market plays by different rules. If you don't follow what the majority of the market does, it's much cheaper to just ignore that tiny customer segment which requires a totally alien set of technologies.
Which is precisely what I said. They don't care that the larger gaming market ignores their platform. Apple Arcade and other gaming related endeavours all aim at the casual mobile gamer market.
>If you don't follow what the majority of the market does, it's much cheaper to just ignore that tiny customer segment which requires a totally alien set of technologies
iOS and Macs both use Metal.
You can't refuse to support Metal without missing out on a very big slice of total gaming revenue.
That Steam stat is probably a chicken and the egg situation. I know I don’t run Steam on my Macbook because there’s nothing I want to play — but I would if there was.
Still the Mac marketshare is not that high (~15%?) but might start looking attractive to developers looking to “get in first” when hardware that can actually run games becomes available (cough).
I mean, it’s similar to Linux, right? Linux has about 2% on Steam, and that’s with compatibility layers like wine allowing many windows games to run cross platform. This [0] puts Mac at 9.5% of operating systems overall and Linux at 2.4%.
But games with native Linux support are not very common compared to Windows, even though it’s mainly a matter of supporting Vulkan, which many modern games already do. My point is that even though Linux should be relatively easy to support natively (compared to mac not supporting cross-platform graphics APIs out of the box), devs aren’t putting the effort in.
I really hope this changes, and hopefully mac “gaming-level” hardware could help push cross-platform work along.
Desktop games and mobile games are not the same. On mobile pretty much every heavy game uses either UE or Unity. High end PC games use custom engines that are heavily tuned for x86 and use different APIs. Metal port would be expensive and not worth it.
Most games use licensed engines, but most AAA games use their own engines. Only 3 of the top 10 games on Steam use an engine that supports Metal. More than half of the games in the top 100 use engines that don't support Metal. This month we have a few prominent releases on PC: Guardians of the Galaxy, Far Cry 6, Age of Empires 4, FIFA 22, The Dark Pictures Anthology: House of Ashes, and Back 4 Blood. Of these 6 titles, only the last 2 use UE4; the others use their own custom engines. And I could go on and on.
Blender has a strong use case in the animation and movie ecosystems. Pixar (RenderMan) had strong connections with Jobs, and in turn with Blender; games may not really be on their radar for the Blender sponsorship.
Besides, supporting creator workflows (Final Cut Pro, best-in-class laptop graphics, Blender, etc.) doesn't mean they want to directly support gamers as buyers, just that they believe creators who produce games (or other media) are a strong market for them to go after.
The marketing is aimed squarely at the post-pandemic WFH designer market. They either had to ship their expensive Mac desktops home or come in and work at the office last year. This laptop's graphics-performance pitch is for that market to buy/upgrade now.
The M1 Pro & Max GPUs are approaching dedicated laptop GPU levels, so Minis with them would be reasonable low/mid-end gaming machines, comparable to the best PC gaming laptops.
The M* Mac Pros will start out with built-in graphics at the M1 Max level and go 4 or perhaps 8x higher, still using integrated graphics.
The fact that Apple now supports $5K dedicated graphics cards on the pro series suggests that the pro M* Macs will be able to use any graphics card (that supports Metal) that you can buy.
The fact that Apple's store was crashing continually after the new laptops were announced suggests that the market for Mac games is going to grow a lot faster than anyone thinks...
It is a good machine, yes, although I wouldn't put too much stock in store crashing as an indicator of popularity; that could just be poorly managed web services at Apple.
Delivery dates are even now within 1 week after shipping starts (Nov 3-10), so demand seems within their initial estimates.
Hardware is not only reason gaming is not strong on Mac. Microsoft had a decent hardware and software offering for their Nokia phones in the later years, that didn't help them.
It will take an ecosystem of developers investing years of effort in building a deep catalog. Game publishers are not going to risk spending that kind of effort unless there is already enough of a market for large titles to recover the money. While this can grow organically, to compete with MS, who have Xbox as well as dominance in PC gaming, Apple will need to be active in the attempt.
That means a lot of effort in dev community engagement, incentivizing publishers to release on their platform, getting exclusive deals, etc. After all this, it still may fail. Apple has not shown any interest in even trying to do that so far.
I just wanted to point out that delivery times have now bumped to Nov 26 -> Dec 6 for the same configuration I ordered on Monday afternoon, which was then Nov 10 -> Nov 17. In less than 2 days, the shipment has been bumped by 2 weeks or more.
That's interesting. It could be smaller initial supplies, perhaps due to the chip shortage or supply chain issues; the average wait time for ships is quite bad.
I would expect demand to taper off during the pre-order phase. People who would pre-order are likely to do it earlier rather than later, so they don't have to wait a month like now.
The rest are going to wait for reviews, check it out at a store, and buy one when they need it / can afford it over the next months.
This is roughly in line with what I expected, given the characteristics of the M1. It's still very power efficient and cool, has more CPU cores, a lot more GPU cores, wider memory controller, and presumably it has unchanged single core performance.
Apple clearly doesn't mean these to be a high-performance desktop offering though, because they didn't even offer a Mac mini SKU with the new M1s.
But what I'm really curious about is how Apple is going to push this architecture for their pro desktop machines. Is there a version of the M1 which can take advantage of a permanent power supply and decent air flow?
I don't think they are going to make desktop versions, they'll probably put the pro and max versions in a new iMac body during Q2 2022 and might add config options to mac mini. Might be for supply chain reasons, focusing 100% on macbook pro production to meet the absolutely massive incoming demand.
I think it likely that Apple will just ramp up what they've been doing so far - make an "M1 Ultra" that has 32 or 64 or even 128 cores of CPU, at least that many GPU and scale the memory and I/O in the same way. Put the one with fewer cores in the iMac Pro and the one with the most cores in the Mac Pro.
Every couple of years when they upgrade again every product they make will go up to M2, then M3, etc. etc.
Yeah, especially if it could run Linux. This would be a powerful little server.
I decked out my workstation with a 16 core Ryzen & 96GB RAM and it didn't cost anywhere near the price of this new 64GB M1 Max combo. (But it may very well be less powerful, which is astonishing. It would be great to at least have the choice.)
I assume because so far the only devices released with M1 processors have been consumer devices with attached screens, the focus has been on bringing over the full graphical, desktop Linux experience to it, not just headless server linux.
Always wondered who the Mac minis were marketed towards. Sure they're cool but wouldn't you want something portable should you want to go to a coworking location or extend a vacation with a remote work portion of the trip?
Surely in the world of covid remote work is common among developers... Would that mean you'd need a second device like an Air to bring on trips?
In a world of COVID remote work I got tired of my i9 always running loud and hot, and replaced it with a Mac mini, which runs at room temperature and is just as snappy or snappier. I was almost always running the i9 in clamshell mode hooked up to the two displays anyway, and unplugging it and plugging it back into the TB hub caused weird states that required reboots maybe 30% of the time.
So now I've got a Mac mini hooked up permanently to those monitors and it just works.
Now I am very tempted to trade in both my Mac mini and i9 for a 16" M1 pro so I can once again return to one machine that isn't always out of sync.
But I'm going to wait to see how well it runs in clamshell mode hooked up to 2 4k displays.
Agreed, and the mini has different I/O that you might prefer (USB-A, 10 gig eth). Also, it’s smaller (surprise, “mini”). Plus, clamshell mode on a MacBook just isn’t the same as a desktop for “always-on” use cases.
This exact question was asked a year ago when the M1 was announced.
In the year since, their laptop market share increased about 2% from 14 to 16%[0].
The reasons for this are:
1. When deciding on a computer, you often have to decide based on use case, software/games used, and what operating system will work best for those use cases. For Windows users, it doesn't matter if you can get similar performance from a Macbook Pro, because you're already shopping Windows PCs.
2. Performance for most use cases has been enough for practically a decade (depending on the use case.) For some things, no amount of performance is "enough" but your workload may still be very OS-dependent. So you probably start with OS X or Windows in mind before you begin.
3. The efficiency that M1/Pro/Max are especially good at are not the only consideration for purchase decisions for hardware. And they are only available in a Macbook / Macbook Pro / Mini. If you want anything else - desktop, dedicated gaming laptop, or any other configuration that isn't covered here, you're still looking at a PC instead of a Mac. If you want to run Linux, you're probably still better off with a PC. If you want OS X, then there is only M1, and Intel/AMD are wholly irrelevant.
4. Many buyers simply do not want to be a part of Apple's closed system.
So for Intel/AMD to suddenly be "behind" still means that years will have to go by while consumers (and especially corporate buyers) shift their purchase decisions and Apple market share grows beyond the 16% they're at now. But performance is not the only thing to consider, and Intel/AMD are not sitting still either. They release improved silicon over time. If you'd asked me a year ago, I'd say "do not buy anything Intel" but their 2021 releases are perfectly fine, even if not class-leading. AMD's product line has improved drastically over the past 4 years, and are easy to recommend for many use cases. Their Zen 4 announcement may also be on the 5nm TSMC node, and could be within the ballpark of M1 Pro/Max for performance/efficiency, but available to the larger PC marketplace.
1) In the pro market (audio, video, 3d, etc) performance is very relevant.
2) Battery time is important to all types of laptop users.
3) Apple is certainly working on more desktop alternatives.
4) You don't need to move all your devices into the closed ecosystem just because you use a Mac. Also, some people just don't want to use macOS on principle, but I'm guessing this is a minority.
> AMD's product line has improved drastically over the past 4 years
My desktop Windows PC has a 3700X which was very impressive at the time, but it is roughly similar in perf to the "low end" M1 aimed at casual users.
> Their Zen 4 announcement may also be on the 5nm TSMC node, and could be within the ballpark of M1 Pro/Max for performance/efficiency, but available to the larger PC marketplace.
In the pro market especially you have people who are stuck using some enterprise software that is only developed for PC like a few Autodesk programs. If you are into gaming many first party developers don't even bother making a mac port. The new call of duty and battlefield games are on every platform but switch and mac OS, and that's increasingly par for the course for this industry since mac laptops have been junk to game on for so long.
My counterpoint is that for the pro market, portability and power efficiency are not always that important. Yes, for plenty of people. But many pros are sitting at a desk all day and don’t need to move their computer around.
For these users, you're not comparing the M1 Max to laptop CPUs/GPUs, but to the flagship AMD/Intel CPUs. Based on early results from the M1 Max on Geekbench, the 11900K and 5950X are still better. And the best GPUs for pros are absolutely still significantly more powerful than the M1 Max.
This makes sense, because you have to dedicate a lot of power to desktop PCs, which you just can’t do in a laptop. But I think the pro question is still often “what gives me the most performance regardless of power usage,” and the answer is still a custom built computer with the latest high-end parts from Intel/amd/Nvidia, not apple.
Obviously Apple is basically offering the best performance hands down in a portable form factor. But Apple also isn’t releasing parts like the 3090 which draws like 400W, so it’s not yet competing for super high end performance.
Point being, I think Intel and AMD aren’t really left in the dust yet.
I think the big thing to remember is that "performance crown" at any moment in time does not have a massive instantaneous effect on the purchasing habits across the market.
I have no doubt that Apple will continue to grow their market share here. But the people that continue to buy PC will not expect ARM-based chips unless someone (whether Intel, AMD, Qualcomm or someone else) builds those chips and they are competitive with x86. And x86 chips are not suddenly "so bad" (read: obsolete) that no one will consider buying them.
> My desktop Windows PC has a 3700X which was very impressive at the time, but it is roughly similar in perf to the "low end" M1 aimed at casual users.
Still, your desktop can play AAA games at 120Hz with an appropriate GPU attached. No M1 device can do that. So once more, performance doesn't mean anything if you can't do what you want with it.
These are unit market share numbers, so will include large numbers of PCs - both consumer and corporate - at price points where Apple isn't interested in competing because the margins are probably too low.
I suspect by value their share is far higher and their % of profits is even bigger.
The strategy is very clear and it's the same as the iPhone. Dominate the high end and capture all the profits. Of course gaming is an exception to this.
The bad news for Intel is that they make their margins on high end CPUs too.
For Intel and AMD there are two different questions: will Intel fix their process technology, and will AMD get access to TSMC's leading nodes in the volumes needed to make a difference to their market share?
Laptops are only a small component of the business for CPU manufacturers like AMD/Intel. AMD is traditionally weak in laptops and has never had decent market share there. This doesn't impact their business that much (Intel's numbers are not down that much after losing Apple's deal, after all).
AMD and especially Intel have a high-margin server CPU business. Apple's entire value prop is in the low-power segment; their chips are not designed to compete in the high-power category, and they will never be sold outside Apple's own product lineup as standalone chips. AMD also does custom chip work, like with the PlayStation 5; none of that is threatened by Apple.
Server chips with ECC support, enterprise features, and other typically high-end capabilities have very high profit per unit, a lot more than even Apple can make per chip (maybe a higher percentage for Apple, but not absolute $ per chip). Apple is a minor player in the general CPU business.
There will of course be pressure from OEMs who stand to lose sales to Apple to step up their game, but AMD/Intel are not losing sleep over this in terms of revenue/margin yet.
Sure Intel have a server business but that’s smaller than client computing and revenues there are falling.
I don’t know what the precise % is but if Apple have 8% market share by volume their % of Intel’s client business by value is much higher. Losing a growing customer that represents that much of your business is not a trivial loss.
Of course this is all part of a bigger picture where falling behind TSMC enables a range of competitors both on servers and clients. If they don’t fix their process issues - and they may well under PG - then this will only get worse.
The Client Computing Group is larger, yes; however, a few things to keep in mind:
1. They don't split out revenue for the laptop market alone, so it's hard to say what impact laptops (especially Apple) have on their revenue or margins.
2. CCG is also much slower growing than the Data Center Group (DCG) business for Intel over the last 4-5 years, and that is expected to continue.
3. The Apple deal was likely their lowest-margin large deal (perhaps even losing money). Apple is not known for being generous to suppliers, and Intel was not in any position of strength to ask for great margins in the years leading up to Apple Silicon; the delayed processors, poor performance, and the threat of Apple Silicon had to have had an impact on pricing in the deal and therefore their margins.
Not saying that Intel doesn't have a lot to fix, but it's not as if they're suddenly in a much worse position than, say, last year.
Sorry, disagree on all of these points. Intel has new competition in all its highest margin businesses. It’s not going bust and may well turn things around but if you look at the PE ratio it tells you the market is pretty pessimistic vs its competitors.
I get (and respect) that you disagree with my argument. However, only point 3 is inference/opinion that we can argue about.
1 and 2 are just facts from their annual report. Maybe there is scope to argue that they are not relevant here or don't show the full picture, etc., but are you saying the facts are wrong?
On 1. and 2. I’m (hopefully respectfully too) disagreeing with the thrust of the point.
1. We don’t know the precise split but we do know laptops are a very major part of their CCG business (based on laptops having majority of PC market share).
2. DCG revenue was down last quarter and the business is facing major competition from both AMD and Arm so I don’t think we can base expectations on performance over the last five years.
>Anyone can comment on what Intel and AMD are going to do now?
In the short term, nothing. But it isn't like Apple will magically make all PC users switch to Mac.
Right now Intel needs to catch up on the foundry side first. AMD needs to work its way into partnerships with many players using its GPU IP, which is its unique advantage. Both efforts are currently well under way and are sensible paths forward. Both CEOs know what they are doing. I rarely praise any CEOs, but Pat and Dr. Lisa are good.
Intel is releasing heterogeneous chips in less than a month. Both Intel and AMD are betting heavily on 3D stacking, and that should hold Apple off the desktop. A lot of Apple's advantage is a node advantage, and I expect AMD will start getting the latest nodes much sooner than it does now; 5nm Ryzen should be able to compete with the M2, and Intel's own nodes should improve relative to the latest TSMC nodes in a few years (they are not that far behind). But x86 will surely lose market share in the coming decade.
These things came at a great time for me. My late-2014 MBP just received its last major OS upgrade (Big Sur), so I'm officially in unsupported waters now. I was getting concerned in that era from 2015-2019 with all the bad decisions (butterfly keyboard, no I/O, touchbar, graphics issues, etc.) but this new generation of MacBooks seems to have resolved all my points of concern.
On the other hand, my late-2014 model is still performing... fine? It gets a bit bogged down running something moderately intensive like a JetBrains IDE (which is my editor of choice), or when I recently used it to play a Jack Box Party Pack with friends, but for most things it's pretty serviceable. I got it before starting university, it carried me all the way to getting my bachelor's degree last year, and it's still trucking along just fine. Definitely one of my better purchases I've made.
On the hardware side, you could open it up, clean the fan, re-apply thermal paste (after so many years, this will make a big difference) and maybe even upgrade the SSD if you feel like it.
That way, this laptop can easily survive another 1-3 years depending on your use-cases.
I actually ended up pulling the trigger on the base model 14-inch that was announced today, but I'll probably still keep this laptop around as a tinkering device in the future, if only because it's still fairly capable, and I've got some good nostalgia for it!
I've used it since 2017 and it hasn't changed much since then. I guess they recently added some online code pairing feature I don't plan on using, but that's all that comes to mind.
There appears to be a sea change in RAM (on a MacBook) and its effect on the price. I remember I bought a MacBook Pro back in 2009, and while the upgrade to 4GB was $200, the upgrade to 8GB was $1000 IIRC! Whereas the upgrade from 32GB to 64GB was only $400 here.
Back then, more memory required higher density chips, and these were just vastly more expensive. It looks like the M1 Max simply adds more memory controllers, so that the 64GB doesn't need rarer, higher priced, higher density chips, it just has twice as many of the normal ones.
This is something that very high-end laptops do: have four slots for memory rather than two. It's great that Apple is doing this too. And while they aren't user replaceable (no 128GB upgrade for me), they are not just more memory on the same channel either: the Max has 400GB/s compared to the Pro's 200GB/s.
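For the curious, the 200GB/s vs 400GB/s figures fall straight out of the bus width. A tiny sketch of the arithmetic, assuming LPDDR5 at 6400 MT/s and 256-bit / 512-bit buses for the Pro / Max (a reasonable reading of Apple's slides, not an official spec sheet):

    // Peak bandwidth = transfers per second x bus width in bytes.
    let transfersPerSecond = 6_400_000_000.0            // LPDDR5-6400, assumed
    let busWidthBits: [(String, Double)] = [("M1 Pro", 256), ("M1 Max", 512)]

    for (chip, bits) in busWidthBits {
        let gbPerSecond = transfersPerSecond * bits / 8.0 / 1e9
        print("\(chip): ~\(gbPerSecond) GB/s")          // ~204.8 and ~409.6 GB/s
    }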
Apple has always been so stupid about RAM pricing. I miss the days where you could triple your base RAM by spending maybe $75 on sticks from crucial and thirty seconds of effort.
Compared to the M1, they kept the number of memory channels the same but increased the width of each channel. What are the performance implications of this from a pure CPU workload standpoint?
The good news is that AMD is working their butt off on this, and seems to be much closer to Apple in terms of Watts/unit of performance. Intel needs to get in gear, now.
Intel was stuck on the same node for 6 years. Alder lake looks very promising, Alchemist GPUs same. They will have CPU performance crown on laptops in less than 6 months. Power usage will be much better than now. Their 3d stacking strategy is very promising, it will allow for many very interesting SKUs. I wouldn't count them out.
Kind of surprising to me they’re not making more effort towards game support - maybe someone can explain what the barriers are towards mac support - is it lack of shared libraries, x64 only, sheer number of compute cores?
When I see the spec sheet and “16x graphics improvement” I go okay what could it handle in terms of game rendering? Is it really only for video production and GPU compute tasks?
I've answered this a few times, so I'll just give a bog-standard answer here.
Apple had really good game support 3 years ago. Mojave was really among the best when it had Valve Proton natively supported, and it was starting to seem like there might be some level of gaming parity across MacOS, Linux and Windows. Apple broke compatibility with 32-bit libraries in Catalina though, which completely nixed a good 95% of the games that "just worked" on Mojave. Adding insult to injury, they burned their bridges with OpenGL and Vulkan very early on, which made it impossible to implement the most important parts of Proton, like DXVK and CL-piping.
So, is it possible to get these games running on MacOS? Maybe. The amount of work that needs to be done behind-the-scenes is monumental though, and it would take one hell of a software solution to make it happen. Apple's only option at this point is HLE of x86, which is... pretty goddamn terrible, even on a brand-spanking new M1 Max. The performance cores just don't have the overhead in clock speed to dynamically recompile a complete modern operating system and game alongside it.
The gaming capability is there, but Apple only officially supports their own Metal on macOS as far as graphics languages, meaning the only devs with experience with Metal are the World of Warcraft team at Blizzard and mobile gaming studios. MoltenVK exists to help port Vulkan based games, but generally it's rough going at the moment. I'm personally hoping that the volume of M1 Macs Apple has been selling will cause devs to support Macs.
As a former Starcraft 2 player I’d say: don’t count on it. It wasn’t even worth it for Blizzard to make SC2 that performant on Vulkan. They had a graphics option for it, but it was buggy compared to the OpenGL option. When a company that size doesn’t want to commit the dev resources, that leaves little hope for the smaller companies.
They never before had good graphics in mainstream products. And there's no official support for any of the industry-standard APIs (to be fair, there is MoltenVK, but it doesn't have much traction yet). Yes, there is Metal support in UE4/5 and Unity, but AAA games use custom engines, and the cost/benefit analysis didn't make much sense; maybe now that will change.
Have you seen GPU prices lately? Desktop-3070-level performance in a portable laptop for $3,500 is not that bad of a deal. If they make a Mac mini around $2,500, it would be pretty competitive in the PC space.
The gaming market spends the most of any home-electronics segment except ultra-rich mansion hi-fi outfitters. Entry gaming laptops are $1,300, entry GPUs are $400 if you can find them.
They're not competing against consoles, but against big-rig gaming PCs, products from Asus and Razer, and companies like Nvidia/AMD in compute.
Does anyone have a handle on how the new M1X is expected to perform on deep learning training runs vs. an NVIDIA 1080 Ti / 2080 Ti? I think the 400 GB/s bandwidth and 64 GB unified memory will help, but can anyone extrapolate based on the M1?
Seems reasonable. They still sell the Intel Mac mini, despite having an M1-powered Mac mini already. The Intel one uses the old "black aluminum means pro" design language of the iMac Pro. Feels like they are keeping it as a placeholder in their lineup and will end up with two Apple silicon-powered Mac minis, one consumer and one pro.
I doubt that we'll see a really powerful Mac Mini anytime soon. Why? Because it would cannibalize the MacBook Pro when combined with an iPad (sidecar).
Most professionals needing pro laptops use the portability to move between studio environments or sets (e.g. home and work). The Mini is still portable enough to be carried in a backpack, and the iPad can do enough on its own to be viable for lighter coffee-shop work.
Not many would do high-end production work outside a studio or set without additional peripherals, meaning that the high-end performance of the new MBP isn't needed for very mobile situations.
A powerful Mini plus an iPad would therefore be the more logical choice vs. a high-end MacBook Pro. Where you need the power, there's most likely a power outlet and room for a Mini.
For myself, my iPad Pro actually covers all use cases that I would need a laptop for, so my current Macbook Pro is just in clamshell mode on my desk 100% of the time. That's why I would love to replace it with an M1 Max Mac Mini.
An iPad + keyboard has a battery and can be enough for most tasks you'd do outside an office, set, or studio environment when away from your other pro peripherals. You wouldn't cut movies on the trackpad in a coffee shop if you're used to doing it with a mouse and other tools inside a studio. That's what I mean.
There were rumors that it was supposed to be today. Given that it wasn't, I now expect it to be quite a while before they do. I was really looking forward to it.
Their comparison charts showed the performance of mobile GPUs, not the desktop ones. So, I wouldn’t call this “practical”. Most likely depends on what kind of models you are building and what software you use and how optimized it is for M1.
It will definitely be handy for fine-tuning large models due to the huge RAM, but for training from scratch a 3090 is certainly better. They are seemingly cooking up a 128-core GPU; if they release that kraken, it will beat the 3090 in pretty much everything.
I wish that we could compare Intel/AMD at the 5nm process to these chips, to see how much speedup is the architecture vs the process node.
Also, all of the benchmarks based on compiling code for the native platform are misleading, as x86 targets often take longer to compile for (as they have more optimization passes implemented).
I think memory is part of that, whereas it would be excluded for the other chips you mentioned.
But OTOH 57B transistors for 64GB of memory means there would be less than one transistor per byte of memory - so I'm not sure how this works, but I'm not too knowledgeable in chip design.
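The arithmetic actually resolves the confusion: DRAM needs roughly one transistor (plus a capacitor) per bit, so 64GB of memory would by itself need far more transistors than the whole 57B figure; the RAM therefore has to live on separate dies inside the package rather than being counted in the SoC die. A rough sketch (simplified; ignores DRAM array overhead):

    let bytes = 64.0 * 1024 * 1024 * 1024        // 64 GB
    let dramCellTransistors = bytes * 8.0        // ~5.5e11, one transistor per bit
    let socTransistors = 57e9                    // Apple's quoted figure for the M1 Max die
    print(dramCellTransistors / socTransistors)  // ~9.6x the entire SoC budget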
I don't think the 'old' M1 13" Pro is long for this world - for £100 more (16GB+1TB spec) you get a much better machine in the 14" model. But independent comparisons will follow.
I'd love to see a Pro/Max Mac Mini, but that's not likely to happen.
My guess is it will work like using a 65W adapter on a Mac that prefers 95W. It will charge if you're not doing much but it will drain the battery if you're going full tilt.
Just like USB-C cables can differ in capacity I’m finding a need to scrutinize my wall chargers more now. A bunch of white boxes but some can keep my MacBook Pro charged even when on all day calls and some can’t. With PD the size isn’t a great indicator anymore either.
Instead of having to replace the MagSafe power brick for 85€ you can now just replace the cable for 55€.
However, in my personal experience, I've never had the cable fail, but I've had 2 MagSafe power supplies fail (they started getting very hot while charging and at some point stopped working altogether).
I think the 140W power adapter is to support fast charging; they mentioned charging to 50% capacity in 30 mins. I'd imagine the sustained power draw should be much less than 140W.
Yeah, I don't know the peak power draw when everything's at 100% load, but I think (also considering the efficiency of the M1) 140W would keep it charging even under the highest load.
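The back-of-the-envelope numbers support that, assuming a battery around 100 Wh (the 16" pack is in that ballpark) and ignoring charging losses:

    let batteryWh = 100.0                              // assumed capacity
    let halfChargeWh = batteryWh * 0.5                 // 50 Wh
    let averageWatts = halfChargeWh / (30.0 / 60.0)    // 50% in 30 minutes
    print("~\(averageWatts) W into the battery")       // ~100 W, leaving headroom under 140 W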
Indeed. The M1 Max can drive three 6K monitors and a 4K TV all at once. Why? Professional film editors and color graders. You can be working on the three 6K monitors and then output your render to the 4K TV simultaneously.
I’m assuming that running multiple 5-6k monitors will still require multiple cables/ports though. One thunderbolt port per monitor.
I’m still waiting for the day we can hook 2 5k monitors + peripherals up to a thunderbolt dock and then hook that up to a MacBook using a single cable.
MST is not supported and won't be supported, that's been Apple policy for some time now. Only Thunderbolt tunneling is supported for multiple screens on one output (which logically provides separate DP outputs, no MST)
- M1 Max SoC was highlighted (Connectivity: Display Support -> 33:50):
- Supports three XDR Pro Displays and a 4k television simultaneously
- "75 Million pixels of real estate."
- Highlighted still having free ports with this setup.
It's their marketing term for their HDR tech. The XDR displays are their flagship media productivity monitors. The new screens in the 14/16 inch MacBooks have "XDR" tech as well.
Just so I don't misunderstand, does that mean I need XDR or will it work with any monitor? I was very surprised to see that the original M1 only supported one external monitor so just want to confirm before I press buy.
It will of course work with other monitors, but there aren't that many high res monitors out there. The Apple display is the only 6k monitor I know of. There are a few 5k monitors and lots of 4k monitors.
There's one 8k monitor from Dell, but I don't think it's supported by macOS yet.
It’s interesting that the M1 Max is similar in GPU performance to RTX 3080. A sub $1000 Mac Mini would end up being the best gaming PC you could buy, at less than half the price of an equivalent windows machine.
While not that big of difference, the 14" is a little cheaper:
* The 14" MBP with M1 Max 10-core CPU, 24-core GPU, 32gb memory is $2,900
* The 14" MBP with M1 Max 10-core CPU, 32-core GPU, 32gb memory is $3,100.
The Mac mini with the Max variant chip will certainly be more than $1,000. But I expect it will be more reasonable than the MBPs, maybe $2,100 for the 32-core GPU version with 32gb of memory. That's how much the currently available Intel 6-core i7 Mac mini with 32gb of memory and 1TB of storage costs.
The point being that the cheapest 32-core GPU is $3,100, for competing with an RTX 3080 mobile that's constrained to about 105W (before it starts to pull ahead of the 32-core GPU in the M1 Max).
Overall it's just a silly premise that a sub-$1000 Mac Mini "would end up being the best gaming PC you could buy." That comment speaks to either not knowing the pricing structure here, or misunderstanding the performance comparisons.
A mid-to-low end desktop GPU pulls closer to 150-200W, and is not part of the comparisons here. And as Apple increases the performance of their chips, they also increase the price. So unless they start having 3 chips with the cheapest one being less than $1000 and massively ahead of desktop GPUs while pulling less than 50W, it's not going to happen. I don't see it happening in the next 5 years, and meanwhile Nvidia and AMD will continue their roadmap of releasing more powerful GPUs.
400 GB/s is insane memory bandwidth. I think a m5.24xlarge for instance has something around 250 GB/s (hard to find exact number). Curious if anyone knows more details about how this compares.
I think Anandtech showed an M1 P-core could max out 50+ GB/s on its own, so 8 P-cores alone could likely use 400GB/s. With both the CPU+GPU, they'd be sharing the BW. A 3060/6700XT has ~360-384 GB/s, so game benchmarks should be interesting to see at high res.
Additionally, do the 24-core GPU and 32/64 GB RAM variants of the M1 Max all have 4 128-bit memory controllers enabled? The slide seems to just say 400GB/s (Not "up to"), so probably all Max variants will have this BW available.
The value is in the power efficiency here (N5P?+), and if you can afford a $3k+ laptop.
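Once machines arrive, anyone can get a crude feel for achievable single-thread bandwidth with a streaming read, something like the sketch below (not the STREAM benchmark, just indicative; build with optimizations or loop overhead will dominate):

    import Foundation

    // Sum a buffer much larger than any cache and divide bytes touched by elapsed time.
    let count = 512 * 1024 * 1024 / MemoryLayout<Double>.size   // 512 MB of doubles
    let buffer = [Double](repeating: 1.0, count: count)

    let start = DispatchTime.now().uptimeNanoseconds
    var sum = 0.0
    for value in buffer { sum += value }
    let seconds = Double(DispatchTime.now().uptimeNanoseconds - start) / 1e9

    let gbPerSecond = Double(count * MemoryLayout<Double>.size) / seconds / 1e9
    print("checksum \(sum), ~\(gbPerSecond) GB/s single-threaded read")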
> However, Apple is unique in putting emphasis in keeping hardware interfaces compatible across SoC generations – the UART hardware in the M1 dates back to the original iPhone! This means we are in a unique position to be able to try writing drivers that will not only work for the M1, but may work –unchanged– on future chips as well. This is a very exciting opportunity in the ARM64 world. We won’t know until Apple releases the M1X/M2, but if we succeed in making enough drivers forwards-compatible to boot Linux on newer chips, that will make things like booting older distro installers possible on newer hardware. That is something people take for granted on x86, but it’s usually impossible in the embedded world – and we hope we can change that on these machines.
There is technically no reason Apple could not introduce a cloud computing service based on their silicon at some point. But would it generate the kind of profit margins they need? An interesting space to watch.
I was having a vision last night of a server rack filled with iPad Pros w/ Ethernet through the USB-C. I still wonder what the performance per mm^3 would be in comparison to a traditional server.
Lots of junk comments, but I guess that happens with Apple announcements. Laptops seem impressive to me, I want to see the real world use metrics. Pushing hard on the performance per watt type metric and no doubt they have a lot of power and use less power. Seems like they listened to the outcry of people regarding the Touch Bar and more ports. Seems like this should sell well.
Seems they may have finally noticed the hit from a decent number of the pros using their products migrating to different platforms, and realized they needed to take a few steps back on the more radical innovations to put out a solid working machine. Hell, I haven't wanted an Apple machine since the early days of the unibody, when other manufacturers started releasing the same form factor. This has me considering one for my next development machine, depending on the price premium over the competition.
Agreed these counts look high - but the fact that they can slot this into a $3,499 laptop is remarkable and must say something about the cost effectiveness of TSMC 5nm process.
The CPU may not have a 32 core GPU inside it but that doesn't stop you from adding the 2 numbers together and seeing it's still significantly less than 57 billion.
Would be very curious to see what took all of the die space. Neural engine? Those additional video encoding engines? I doubt we'll get to know unfortunately.
They share the core designs, Icestorm for efficiency cores and Firestorm for performance cores, but these are recombined into entirely different systems on chip. To say the M1 Max is the same as the A14 is like saying a Xeon is the same as an i3 because they both have Skylake-derived cores.
The differences between the A14 and A15 are so small it doesn't matter. I suspect the CPU/GPU cores come from the A14 but the ProRes accelerator comes from the A15.
>The differences between the A14 and A15 are so small it doesn't matter.
The testing shows increases in performance and power efficiency.
>Apple A15 performance cores are extremely impressive here – usually increases in performance always come with some sort of deficit in efficiency, or at least flat efficiency. Apple here instead has managed to reduce power whilst increasing performance, meaning energy efficiency is improved by 17% on the peak performance states versus the A14. If we had been able to measure both SoCs at the same performance level, this efficiency advantage of the A15 would grow even larger. In our initial coverage of Apple’s announcement, we theorised that the company might possibly invested into energy efficiency rather than performance increases this year, and I’m glad to see that seemingly this is exactly what has happened, explaining some of the more conservative (at least for Apple) performance improvements.
On an adjacent note, with a score of 7.28 in the integer suite, Apple’s A15 P-core is on equal footing with AMD’s Zen3-based Ryzen 5950X with a score of 7.29, and ahead of M1 with a score of 6.66.
So if I want a new Macbook purely for software development and building mobile apps, what should I pick between the $2499 14" and the $3499 16"? Doesn't look like there's any difference in Xcode build times from their website
14" + M1 Max (24 GPU cores) with 32GB RAM is the sweet spot imho. It costs a bit more, but you get twice the memory bandwidth and double the RAM, which will always prove handy.
I develop iOS apps and I think this is the sweet spot. I am not sure what impact the extra bandwidth of the M1 Max will have, though. We will have to wait and see. For video editing it's clear; for Xcode, not so sure.
14 or 16 inches is up to personal preference. I just value more the smaller package and the reduced weight. Performance is about the same.
Is it disclosed anywhere what bandwidth the CPU complex has to memory? There's the overall bandwidth to memory, which was probably made so high to feed the GPU, but can the CPUs together actually drive 200 or 400 GB/s to memory?
If they can, that's an absolutely insane amount of bandwidth. You can only get ~200 GB/s on an AMD EPYC or Threadripper Pro CPU with 8 channels of DDR4-3200, so here we have a freakin' LAPTOP with as much or even double the bandwidth of the fastest and hungriest workstation CPUs on the market.
Excited to see what a future Apple Silicon Mac Pro looks like and makes me quite envious as someone who is stuck in the x86 world.
I’m waiting on Apple’s final decision on the CSAM scanner before I buy any more hardware from them. These processors look cool, but I don’t think they’re worth the Apple premium if they’re also spying for law enforcement.
What is it about the M1 architecture that makes it so speedy compared to x86 chips? Is it the Risc instruction set? The newer node process? Something else?
RISC is a big part of it, and enabled the biggest part, which is Apple got to design their own SoC. And they have their own OS, so they can cause a complete paradigm shift without having to wait for the OS company/collective to come around.
Oh, them's fighting words =) I wrote ARM assembly before I wrote MIPS, so for me, MIPS looks like ARM. ARM was influenced by Berkeley RISC. MIPS came out of Stanford. ARM and MIPS CPUs were both introduced in 1985. And MIPS is dead, whereas ARM is doing slightly better, so, statistically, if you show a RISC assembly programmer MIPS code, they'll probably say "that looks like ARM".
Now there are two approaches to the performance argument.
First, I will argue that RISC processors provide better performance than CISC processors.
Second, the counter-argument to that is that, actually, no, modern RISC processors are just as complex as CISC processors, and the M1 is faster simply because Apple. My second argument is that Apple chose ARM because of RISC. So even if it were true, now, that one could build a CISC chip that is just as performant as a RISC one, the fact that right now the most performant chip is a RISC chip is ultimately because it is RISC.
Do RISC processors provide better performance than CISC? The #1 supercomputer in the world uses ARM. AWS's Graviton offers better price and performance than AWS Intel. M1 is faster power/performance than any x86. ARM holds all the records. But it's just a coincidence?
I think PP's position is that CISC or RISC doesn't matter. One argument I've heard is that its the 5nm production node that matters, and that CISC or RISC, it's all the same nowadays.
So how is Apple on 5nm? Why are the CISC manufacturers stuck on 7nm (or failing to get even there)? When the Acorn Computer team were looking for a new processor, they were inspired by UC Berkeley and their RISC architecture. In particular, they were inspired by the fact that students were designing processors. They decided that they, too, could build a new CPU if they used RISC, and that was the Acorn RISC Machine. The ARM. I do not believe that RISC and CISC are "basically the same" when it comes to processor design. The fact that Intel is still stuck on 10nm(?) must in part be due to the thing being CISC. One might argue that it's because they made a particularly complicated CISC, but that would only make my point for me. I don't think that "only the instruction decoder is simpler so it doesn't make much difference" holds any water. I would love to hear from actual CPU designers who have worked on M1 or Graviton, but if they said "RISC is easier" then they would be dismissed as biased.
But let's suppose that no, actually, the geniuses at Apple could equally create a CISC CPU that would be just as performant, I'd still argue that the success is because of RISC. M1 is the most recent of a long line of ARM CPUs. They are ARM CPUs because the first in that long line needed battery life - the Newton. M1 is ARM because ARM was RISC when RISC mattered. You may argue that RISC doesn't provide a benefit over CISC now, but it certainly did then.
And how does Apple have such geniuses? Again, this is largely because the iPhone was perhaps the largest technological step-change in history. They have the market cap they do because of iPhone, and they have iPhone because RISC. So even if the argument is "Well, the M1 is fast because Apple has lots of money" well that is because of RISC.
But I still think that it's easier to build a faster CPU with RISC, and I expect the first RISC Mac Pro will prove me right. At which point, RISC will own performance/watt, performance/price, the #1 supercomputer, and, at last, the fastest desktop.
They are using the M1 Pro to get their battery claim numbers.
I ordered an M1 Pro based on the slightly lower price and my assumption that it will be less power hungry. If it is only 200 dollars cheaper, why else would they even offer the M1 Pro? The extra performance of the Max seems like overkill for my needs, so if it has worse power consumption I don't want it. I could probably get away with an M1, but I need the 16" screen.
We will find out in a few weeks when there are benchmarks posted by 3rd party reviewers, but by that time who knows how long it will take to order one.
There are two kinds of offsets: 1. existing offsets where you buy them but there is no net offset creation, 2. newly created offsets where your purchase makes a net difference. An example of (1) could be buying an existing forest that wasn’t going to be felled. An example of (2) could be converting a coal electricity plant to capture CO2 instead of shutting the plant down.
A quick skim of the Apple marketing blurb at least implies they are trying to create new offsets e.g. “Over 80 percent of the renewable energy that Apple sources comes from projects that Apple created”, and “Apple is supporting the development of the first-ever direct carbon-free aluminium smelting process through investments and collaboration with two of its aluminium suppliers.” — https://www.apple.com/nz/newsroom/2020/07/apple-commits-to-b...
Has anyone tried extremely graphically intense gaming on these yet? I would actually love to consolidate all of my computer usage onto a single machine, but it would need to handle everything I need it to do. $2000 for a laptop that can replace my desktop is not a bad deal. That said, I'm in no rush here.
Gaming has been a non-starter on macOS since 2018. You can get certain specific titles working if you pour your heart and soul into it, but it's nothing like the current experience on Linux/Windows, unfortunately.
Hard pass then. I already have a pretty fast Windows laptop. The only real issue is that it sounds like a helicopter under load. (It also thermal throttles hard.)
I'm switching now, after waiting for a 16" M1 model for more than a year.
However, my current laptop is a 2015 MacBook, and I've never had any issues with it when it comes to coding. If anyone here is switching and you don't do anything like 3D/video editing, I'm curious what your reason is.
Prediction for the Mac Pros and iMac Pros: several SoCs on the mainboard, interconnected with a new bus, 16 CPU cores per SoC, 4 SoCs max. The on-SoC RAM will act as an L4 cache, and they will share normal, user-replaceable DDR5 RAM for "unified" access.
Looks like Naples. It seems NUMA is not easy to handle, especially in a personal computer. So I wondered whether Apple would use a chiplet approach for the M1X, but apparently not.
It's a bit concerning that the new chips have special-purpose video codec hardware. I hope this trend doesn't continue to the point where you need laptops from different manufacturers to play different video formats, or at least to play them without degraded quality.
Well, they've been features of special ASICs that just happen to be on the GPU. Video decoding is not really suitable for CPUs (especially since H.264 or so) but it's even less suitable for GPGPU.
Apple has an architecture license from ARM, so they're allowed to create their own ARM-compatible cores. They do not license any core designs from ARM (like the Cortex A57), they design those in house instead.
Apple uses the Arm AArch64 instruction set in Apple silicon, which is Arm IP. I don't believe any Apple silicon uses any other Arm IP, but who really knows, since Apple likely wrote its contracts to prevent disclosure.
Apple had a choice: many CPU cores or a bigger GPU. They went with a much bigger GPU. Makes sense: most software is not designed to run on a large number of CPU cores, but GPU software is massively parallel by design.
Doing stuff in two different windows just became a bit clumsier every time, e.g. for code reviews. I can imagine manually resizing windows when in a tiling mode.
Doubtful, because those purchases are often driven by software reasons, and there Apple loses heavily (whether it's corporate manageability of the laptops, or access to special software which rarely has Mac versions)
These chips are impressive, but TBH I have always been annoyed by these vague cartoon line graphs. Like, is this measured data? No? Just some marketing doodle? Please don't make graphs meaningless marketing gags. I mean, please don't make graphs even more meaningless marketing gags.
Apple had SteamVR support for a while, and even Valve Proton support for a while (meaning that Windows-only games were quite playable on Mac hardware). Unfortunately, Apple pulled 32-bit library support without offering a suitable alternative, so Valve was forced to scrap their plans for Mac gaming entirely.
They could have been optimizing for lower power consumption rather than more compute power. For example, the next iPhone chip will likely not be the most powerful when it comes to compute, even if it beats the other iPhone chips.
Maybe currently, but they are only on their second generation of laptop chips.
I guess going forward the current A-series chip will be lower power/performance than any reasonably recent M-series chip (given the power envelope difference).
Ah yes, the naming. Instead of M2 we got M1 Pro & M1 Max. I'm waiting for M1 Ultra+ Turbo 5G Mimetic-Resolution-Cartridge-View-Motherboard-Easy-To-Install-Upgrade for Infernatron/InterLace TP Systems for Home, Office or Mobile [sic]
I am not interested in Apple's ecosystem. While I stay with x86, I wonder if and when AMD and Intel will catch up, or if another ARM chip maker will release a chip as good without tying it to a proprietary system.
No one will bother to make an ARM chip for "PCs" if there is no OS to run on it. MS won't fully port Windows to ARM with a Rosetta-like layer unless there is an ARM computer to run it.
Yes, Linux can run on anything, but it won't sell enough chips to make creating a whole new ARM computer line profitable.
MS has a full Windows for arm64 with a Rosetta-like layer already. It's been available for at least two years, although it only supported 32-bit x86 emulation at first (presumably because of the patents on x86-64). x86-64 emulation was added recently (https://blogs.windows.com/windows-insider/2020/12/10/introdu...)
I’m not super familiar with how hardware works, so this might be a stupid question, but how different are the tiers of processors at each upgrade, and what's a reasonable use case for choosing each of them?
I know Apple has a translating feature called Rosetta. But what about virtual machines? Is it possible to run Windows 10 (not the ARM edition but the regular, full Windows 10) as a virtual machine on top of an Apple M1 chip? It looks like UTM (https://mac.getutm.app/) enables this, although at a performance hit, but I don’t know how well it works in practice. What about Parallels - their website suggests you can run Windows 10 Arm edition but doesn’t make it clear whether you can run x86 versions of operating systems on top of an M1 Mac (see old blog post at https://www.parallels.com/blogs/parallels-desktop-m1/). I would expect that they can run any architecture on top of an ARM processor but with some performance penalty.
I’m trying to figure out if these new MacBook Pros would be an appropriate gift for a CS student entering the workforce. I am worried that common developer tools might not work well or that differences in processors relative to other coworkers may cause issues.
Neither Apple, Microsoft, nor Parallels is planning to support x86-64 Windows on Apple silicon. You can run emulation software like QEMU and it works but it is very slow. UTM uses QEMU.
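To make that concrete, here is a rough sketch of the kind of full-system emulation UTM drives under the hood, written as a Python wrapper around QEMU. The disk image and installer ISO filenames are hypothetical, and since there is no x86 hardware virtualization on an ARM host, this falls back to QEMU's pure software (TCG) emulation, which is where the slowness comes from.

    # Rough sketch only: booting x86-64 Windows under QEMU's software (TCG)
    # emulation from Python. Filenames are hypothetical; expect it to be slow.
    import subprocess

    subprocess.run([
        "qemu-system-x86_64",                       # x86-64 system emulator
        "-machine", "q35",                          # modern PC chipset model
        "-accel", "tcg",                            # pure software emulation on an ARM host
        "-m", "4G", "-smp", "4",
        "-drive", "file=win10.qcow2,format=qcow2",  # hypothetical disk image
        "-cdrom", "Win10_x64.iso",                  # hypothetical installer ISO
        "-boot", "d",                               # boot the installer first
        "-vga", "std",
        "-nic", "user",                             # simple user-mode networking
    ], check=True)

Parallels on M1, by contrast, only virtualizes ARM guests (such as Windows on ARM), which is why x86-64 Windows is off the table there.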
MacBook announcements are the opposite of slow news days. Consumer tech outlets are literally scrambling to cover every aspect of these things because there is so much interest.
As is typical for Apple, the phrasing is somewhat intentionally misleading (like my favorite Apple announcement, "introducing the best iPhone yet", as if other companies are going backwards?). The wording is of course carefully chosen to be technically true, but to the average consumer, this might imply that these are more powerful than any CPU Apple has ever offered (which of course is not true).
Napkin math based on Ethereum mining, which on the original 8-GPU-core M1 was about 2 MH/s, puts M1 Max GPU performance (8 MH/s with 32 cores) at only about 1/4 of a mobile 3060, which does over 34 MH/s.
So I am extremely skeptical about Apple's claims of "comparable" GPU performance to the RTX 30xx mobile series. And again, that RTX is still on 7nm.
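Spelling out that napkin math, with the big assumption that hashrate scales linearly with GPU core count (which if anything flatters the M1 Max, since Ethash is mostly memory-bandwidth bound):

    # Napkin math only. Assumes hashrate scales linearly with GPU core count,
    # which is optimistic because Ethash is largely memory-bandwidth bound.
    m1_hashrate_mhs = 2.0       # reported for the original 8-core M1 GPU
    m1_gpu_cores = 8
    m1_max_gpu_cores = 32
    rtx_3060_mobile_mhs = 34.0

    m1_max_est = m1_hashrate_mhs * m1_max_gpu_cores / m1_gpu_cores
    print(f"Estimated M1 Max: {m1_max_est:.0f} MH/s")                            # ~8 MH/s
    print(f"Fraction of a mobile 3060: {m1_max_est / rtx_3060_mobile_mhs:.2f}")  # ~0.24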
Great update. I think Apple did the right thing by ignoring developers this time. 70% of their customers are either creatives who rely on proprietary apps or people who just want a bigger iPhone. Those people will be really happy with this upgrade, but I have to wonder what the other 30% is thinking. It'll be interesting to see how Apple continues to slowly shut out portions of their prosumer market in the interest of making A Better Laptop.
I agree this is very targeted towards the creative market (e.g. the SD card slot), but as a developer I'm curious what you would have liked to see included that isn't in this release.
I guess personally having better ML training support would be nice, since I suspect these M1 Max chips could be absolute monsters for some model training/fine-tuning workloads. But I can't think of anything design-wise really.
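For what it's worth, Apple's tensorflow-metal PluggableDevice plugin already lets stock Keras code train on the on-package GPU. A minimal sketch, assuming the tensorflow-macos and tensorflow-metal packages are installed (whether that counts as "better ML training support" is debatable, since most tooling still assumes CUDA):

    # Minimal sketch: training a small Keras model on the Apple GPU via the
    # tensorflow-metal plugin (assumes tensorflow-macos + tensorflow-metal).
    import tensorflow as tf

    print(tf.config.list_physical_devices("GPU"))    # should list the Metal device

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, batch_size=128, epochs=1)    # dispatched to the GPU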
The big ticket items: Hardware-accelerated VSCode cursor animation. Dedicated button for exiting vim. Additional twelve function keys, bringing the total to 24; further, vectorised function key operations, so you can execute up to 12 "nmap <f22> :set background=dark" statements in parallel. Dedicated encode and decode engines that support YAML, TOML and JWT <-> JSON. A neural engine that can approximate polynomial time register allocation almost instantly. The CPU recognises when you are running Autoconf `./configure` checks and fast-forwards to the end.
I would also like a decent LSP implementation for Siri, but the question was about hardware.
Linux has offered the Virtual Function Key System since at least 1998, but there isn't a driver that uses the native 8-lane AVF instruction set on my 230% mechanical keyboard yet.
Honestly, a macOS "Pro Mode" would be great. Let me run the apps I want, not use my mouse, and be able to break things, but keep the non-power-user experience smooth and easy.
I certainly don't feel ignored as a developer by this update. The memory capacity and speed bump, and the brighter, higher-resolution screen are very significant for me. My M1 Air is already a killer portable development machine because of how cool and quiet (silent, really) it runs. The 14" looks like a perfect upgrade for me.
They still have not solved the tiny inverted-T arrow key arrangement on the keyboard. They need to improve on the old IBM six-key cluster below the right Shift key, or arrange full-sized arrow keys in a cross/plus pattern breaking out of the rectangle at the lower right corner.
I am in the market for a new laptop and a bit skeptical about the M1 chips. Could anyone please tell me how this is not a "premium high-performance Chromebook"?
Why should I buy this and not a Dell XPS machine if I will be using it for web development / Android development / C# / DevOps? I might soon mess with machine learning too.
For web dev it works very well; Android development, no clue; C# is primarily a Windows-oriented development environment, so probably not so great. For DevOps, well... macOS is a Linux-esque environment, so it'll be fine?
I have the M1 Air, and the big advantages are the absurd battery life and the extremely consistent, very high performance.
I believe you can get both faster CPU and GPU performance on a laptop, but it costs a lot in battery life and heat, which has a bigger impact on usability than I believed before getting this one.
I should add that this is my first-ever Apple laptop. I've always used Lenovo laptops and ran mostly Windows/Linux dual boots on my laptops and desktops over the years.
That's a debate between OSX and the alternatives; the M1 has little to do with it, except maybe Rosetta for unsupported x86 apps, but I doubt that'll cause you any issues.
Edit: C# support is there on OSX; Rider has Apple Silicon builds, and .NET Core is cross-platform now.
ML is probably a letdown, and you'll be stuck with CPU-sized workloads. That being said, the M1 does pretty well compared to x86 CPUs.
Since these won't ship in non-Apple products, I don't really see the point. They're only slightly ahead of AMD products when it comes to performance/watt, and slightly behind in performance/dollar (in an apples-to-apples comparison of similarly configured laptops), and that's only because Apple is ahead of AMD at TSMC for new nodes, not because Apple has any inherent advantage.
I have huge respect for the PA Semi team, but they're basically wasting that talent if Apple only intends to silo their products into an increasingly smaller market. The government really needs to look into splitting Apple up to benefit shareholders and the general public.
> I have huge respect for the PA Semi team, but they're basically wasting that talent if Apple only intends to silo their products into an increasingly smaller market.
They design the SoCs in all iPhones and soon all Macs. They have the backing of a huge company with an unhealthy amount of money, and are free from a lot of the constraints that come with having to sell general-purpose CPUs to OEMs. They can work directly with the OS developers so that whatever fancy thing they put in their chips is used and has a real impact on release or shortly thereafter, and will be used by millions of users. Sounds much more exciting than working on the n-th Core generation at Intel. Look at how long it is taking for mainstream software to take advantage of vector extensions. I can’t see how that is wasting talent.
> The government really needs to look into splitting Apple up to benefit shareholders and the general public.
So that their chip guys become just another boring SoC designer?
Talk about killing the golden goose. Also, fuck the shareholders. The people who should matter are the users, and they seem quite happy with the products. Apple certainly has some unfair practices, but it’s difficult to argue that their CPUs are problematic.
"They're only slightly ahead..." and "The government really needs to look into splitting Apple up to benefit shareholders and the general public." doesn't really seem to jive for me.
If they're only slightly ahead, what's the point of splitting them up when everyone else, in your analysis, is nearly on par or will soon be on par with them?
This is a poorly considered take, no offense to you. I think you're failing to consider that Apple traditionally drives innovation in the computing market, and this will push a lot of other manufacturers to compete with them. AMD is already on the warpath, and Intel just got a massive kick in the pants.
There are other arguments against Apple being as big as it is, but this isn't a good one. Tesla being huge and powerful has driven amazing EV innovation, for example, and Apple is in the same position in the computing market.
A lot of Apple fans keep saying Apple drives innovation, but I'd love to see where this has actually been true. Every example I've ever been given has a counter-example where someone else did it first and Apple did not do it in a way that conferred a market advantage; the only thing Apple has proven they're consistently better at is having a PR team that is also a cult.
Sure. Here's the basic pipeline: Somebody makes a cool piece of tech, but they don't have the UI/UX chops to make it work. Apple comes along and works some serious magic on the tech to make it attractive for everyday use. Other companies get their roadmap from Apple's release, and go from there.
Some examples:
Nice fonts on desktops
Smartphones
Tablets
Smartwatches (more debatable, but Apple did play a big part here)
In-house SoCs.
I suspect that their future AR offering is going to work the same way. The market is currently nascent, but Apple will make a market.
ARM going mainstream in powerful personal computers was exciting enough as it was, with the release of the Apple Silicon M1. With time hopefully these will be good to use with Linux.
OSX is less than 1/7th of the desktop OS market, and iOS is slightly over 1/4th of the phone market; the major cloud companies are the largest consumers of desktop and server scale CPUs, and buy mostly AMD with some Intel only when cluster compatibility requirements apply; when it is non-x86, it is a mix of things that do not include any Apple ARM offerings but do include larger scale higher performance ARM CPUs and some POWER as well.
The most used architectures of any kind (including embedded, industrial, and automotive) are MIPS, then ARM, then POWER/PowerPC, then x86. Apple is a tiny player in the overall ARM market, and by hyper-focusing on desktop and phone alone, they are giving up important opportunities to diversify their business.
At no point does "Apple will have plenty of customers" make sense in a context where Apple is a $2.45T company: either they have a large majority of possible customers in multiple industries at multiple levels, or reality is going to come crashing in and drive them back down to sub-$1T levels. You cannot convince me that a company with a net income of only $22B a year is worth that much, no matter how much "goodwill" and "brand recognition" and other intangible nonsense they have. Steve Jobs died exactly ten years ago on Oct 5th, and the RDF died with him.
You are forgetting that the GNU/Linux desktop will never happen; it is always going to be Windows or macOS for 98% of the world.
Even if we take ChromeOS and Android into account, ChromeOS is largely irrelevant outside the North American school system, and Android will always be a phone OS.
In both cases, the Linux kernel is an implementation detail.
So that leaves Apple with its 10% market share for all creatives, which someone has to develop software for, and iOS devices, which also require developers to create said apps.