and manufacturer propaganda. Tom's Hardware made a somewhat famous burning-Athlon video (it was later revealed they used a board with a disabled/non-functioning thermal cutout) during the prime Intel Pentium 4 payola timeframe.
https://www.tomshardware.com/reviews/hot-spot,365-4.html "AMD did not bless the Thunderbird core with ANY thermal protection whatsoever."
In reality, AMD's socket certification required a thermal cutout, same as Intel's did for the Pentium 3. AMD processors do include a thermal diode, just like Intel's.
"Intel's older processor is also equipped with a thermal diode and a thermal monitoring unit"
is a lie. Pentium 3 thermal throttling is performed by the BIOS, just as in the Athlon's case. Serendipitously, Tom's Hardware picked a broken Siemens motherboard for the AMD system, as recommended by Intel. Imagine that.
They later published a non-retraction catching Siemens in a lie and showing a proper AMD setup safely shutting down. Of course you can't read it, because it's buried, excluded from the Wayback Machine, and dead-linked: http://www.tomshardware.com/column/01q4/011029/index.html
Anandtech wasn't any better 20 years ago. They used to publish rave Pentium 3 SSE reviews https://images.anandtech.com/old/cpu/intel-pentium3/Image94.... using a piece of Intel-commissioned software as an independent test. https://www.vogons.org/viewtopic.php?f=46&t=65247&start=20#p...
Still at 5nm; double the GPU cores. Reports say up to 3 displays, so it's safe to assume 2 external + 1 internal. 8 high-performance cores and 4 efficiency cores. PCIe gen 4, no idea about the number of lanes.
I find the numbers a little too perfect to put much faith in. Single-core performance that is exactly identical between the two? Floating-point performance exactly double? Seems a bit off.
I'm pretty sure Apple knows what machines it sells, including to video and audio pros, and knows how many of them buy the 32GB options where they're available (in Intel).
Besides "Pro" is just a branding, not a class of machines designed for specific tasks with any specific guarantees.
Up until now, "pros" did just fine with 16GB, or at most 32GB, on their Intel Mac laptops, so I'm not sure how "twice as fast" machines, at a lower price, with the same 32GB of RAM (but much more efficient use of it, closer to the CPU) will suddenly be a problem.
Especially since hardly anyone buys PC laptops with 64GB of RAM anyway -- and Apple doesn't, and never has, competed with custom tower PC rig makers; it sells to people looking for the sweet spot of portability, power, battery life, size, and weight.
I agree, and to be fair, there are plenty of professional use cases that are totally serviceable with one external monitor and 16GB of RAM. Not every professional is a YouTuber or big data engineer.
I can't speak for anyone else, but I do like it when the name of a product actually means something, because it gives me confidence in the honesty of the vendor.
With NVMe, I'd rather have 16GB of LPDDR4X than 64GB of low-JEDEC-spec, high-latency DDR3. More RAM is still good in many cases, but it's not vital like it was 15 years ago, when you'd end up in swap hell or OOMing constantly over a GB or two.
I'm not quite sure why people seem to be clamoring for more RAM; it seems like a non-issue these days. I don't really game anymore, and when I do it's older "Steam sale" titles, so maybe gamers need it, but even for dev work I just don't see the issue.
It's not my job to help people pick out computers better suited to the work they were hired for, but I really enjoy seeing the characteristics of new generations of hardware and where the bottlenecks are.
People seem to be under the mistaken impression that supporting higher memory capacities is hard. It's not; it's trivial. The only reason the M1 doesn't is how it's packaged, and they only chose to package it that way because the products they intended to replace with it need only 2 memory chips. They did what made sense for the M1, and they will do what makes sense for the M1X.
Do we have perf counters that show how memory-starved the M1 is under pressure?
Granted, you'll feel bad if this leak is wrong and Apple shows an M1X with more than 16GB of RAM.
From my experience with both, I think you might be very unhappy in retrospect if you bet that more RAM will bring the most recent Intel one within shouting distance of even an 8GB M1 Mini. The old one was among the best for single-core performance in its generation, but they're not even the same thing, not even close. Again, I own both machines, with the Intel Mini maxed out as far as it can go on RAM.
My gut-level reaction is that you could give the new one lots more RAM (in theory) and it wouldn't end up using it. So they're capping them; otherwise people will assume there's a benefit, and that the 8GB and 16GB ones aren't workable for heavy processing.
I've also got a suspicion (possibly just weird cynicism) that this is why the previous-gen Mac Pro is so huge and expensive: they've overpriced and overdesigned it so hard that it's unattainable for nearly anybody, making it a sort of status object with Veblen pricing, specifically to avoid class-action lawsuits when they release newer machines that outperform it at a tenth of the price.
You were never supposed to actually buy the monstro cheese-grater Mac Pro or its screen with the thousand-dollar stand, and if you did, (a) it's of symbolic value to you and (b) it's your own fault. When the real high-performance computer comes out of Apple, it'll be at Mac Mini prices, and meant to iPhone-ify desktop computing as a category.
They just started the Apple Silicon transition. A bit early for the doom and gloom, no?
I started computing in the days when you could add your own instructions to your computer. I'm not suing anyone, even though that capability is no longer available.
Cover: Schematics for a 5V, 3A regulated power supply and a 1K x 8 read/write memory block. The power supply and three such memory blocks can be added to the basic KIM-1 microcomputer to provide the 4K RAM required by this assembler. Parts are available from Jameco Electronics.
It was a pleasant shock. I ended up returning it as it turns out I really do need at least 32GB of RAM, but it was VERY difficult to take it back.
Even if this rumor is way overhyped, I still can't wait for a MBP that can take at least 32GB of RAM. I'll be there with bells on!
Definitely glad other people are having a better experience.
Would be much better if the base model was 16/512, but that's $500 of nearly pure profit Apple would lose.
I've recently used a 100 GB partition on my personal laptop for client work and it was fine.
(I'm not trying to defend the price of Apple's SSD upgrades.)
Instead of a DIMM, LPDDR basically puts an entire DIMM's worth of memory on one chip soldered to the board (or to the package, on the M1). LPDDR topologies tend to be limited in how many ranks of memory you can have, which limits total capacity.
I think if Apple really wanted to make a "HEDT SoC", they would switch to "quad channel" (256-bit wide) memory, which, along with the jump to LPDDR5, would let them sell 64GB/128GB machines. The only existing SoC with a memory controller that wide is the Tegra Xavier, though.
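For rough scale, here's the back-of-the-envelope bandwidth math (my numbers, not from any leak; peak bandwidth is bus width times transfer rate, and LPDDR5-6400 is an assumption on my part):

    M1 today (LPDDR4X-4266): 128 bits / 8 × 4266 MT/s ≈  68.3 GB/s
    256-bit LPDDR5-6400:     256 bits / 8 × 6400 MT/s ≈ 204.8 GB/s

A wider bus also means more DRAM packages in parallel, which is what would buy the higher capacities.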
Best. Typo. Evar.
These things already don't act as if it's RAM as we're used to thinking of it. They could probably have shipped the first ones with 4GB of RAM and had 'em act relatively normal… even with heavy stuff like 4K video editing. Not great, but normal. So they did 8GB of RAM to have 'em act more impressive… like doing said editing more smoothly, with fewer dropped frames, than far more powerful Intel machines with a lot more RAM.
If anything, the M1's RAM has _higher_ latency than Zen3's.
(Can't reply to the sibling comment but Intel's numbers are even better: https://images.anandtech.com/doci/16214/lat-10900.png)
Is it random accesses you want to test? Or TLB thrashing?
The "R per RV prange" benchmark accesses memory randomly, but in a TLB-friendly pattern (TLB-thrashing real-world workloads aren't particularly common). The M1 gets 30ns; the 5950X is at 55ns and still climbing at the max array size.
The 5950X does better on "full" random, though it's unclear whether that's a faster TLB or the test using a different page size. On Linux, for TLB-thrashing workloads, you can always use huge pages; not sure if the M1 hardware supports that, though.
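For anyone who wants to poke at this on their own machine, here's a minimal pointer-chasing sketch (my own toy, not AnandTech's tool): walking a random cyclic permutation defeats the prefetcher and serializes the loads, and whether it also thrashes the TLB depends on how the array size compares to the page size.

    /* latency.c: rough single-thread memory latency probe.
       Build: cc -O2 latency.c -o latency */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)(256 * 1024 * 1024) / sizeof(size_t)) /* 256 MiB array */
    #define STEPS 20000000L

    int main(void) {
        size_t *chain = malloc(N * sizeof(size_t));
        if (!chain) return 1;
        for (size_t i = 0; i < N; i++) chain[i] = i;
        /* Sattolo's algorithm: shuffle the indices into one big cycle. */
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
        }
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        size_t p = 0;
        for (long s = 0; s < STEPS; s++)
            p = chain[p]; /* each load depends on the previous one */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per load (sink: %zu)\n", ns / STEPS, p);
        free(chain);
        return 0;
    }

Varying N maps out the cache hierarchy; the charts linked above come from much more careful tooling, so treat this as illustrative only.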
Just noting Zen3 is not Intel.
Please stop this nonsense on HN. DRAM is DRAM. L2 cache is L2 cache. It doesn't matter how good the 8-channel LPDDR4X memory architecture sitting next to the die in the same package is; it's still just DRAM.
It takes my M1 a little while longer to stitch together 4K videos than my 2017 MBP 16". My 2017 MBP screams and throttles the CPU down to 60%-75% as soon as editing starts.
Then buy something else, full stop.
Even if the RAM were as fast as L1 cache, it still wouldn't do a thing for the SSD bottleneck.
M1 Macs have been shown to drive 6 displays, so this "3 displays" means "3 4K/5K displays", I suppose.
I just hope it isn’t internal + Touch Bar + 1 external
My current 16" MBP has no problem driving a total of five displays and 16,041,600 pixels - and I'd be hard pressed to go back to just two external displays.
I've found that a Pro Display (6K) and two Dell 4Ks work fine, in addition to the internal display (it's a 16" MBP, not running in clamshell mode).
MacBook 16”: 3072x1920
Pro Display: 6016x3384
Dell U2718Q: 3840x2160 (x2)
Total Pixels: 42,845,184
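For the curious, that total checks out:

    3072 × 1920       =  5,898,240
    6016 × 3384       = 20,358,144
    2 × (3840 × 2160) = 16,588,800
    Total             = 42,845,184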
Modern hardware is frankly amazing. I remember the 22” Trinitron CRTs of years past; owning and driving one of those was a feat. Now we have multiple displays with PPI such that, with average visual acuity, everything is incredibly crisp.
Seems like I can keep Linux and build a 5800X, and have > 32GB RAM to boot.
For those happy with macOS as a daily driver, the Mac Mini is incredible value vs. a full PC build.
And the Mac mini was about 1/3 the price. (I know it's not an equal comparison, but I was always using my laptop with the lid closed, hooked up to 2 4k monitors anyway.)
I will happily upgrade to more power. I don't miss running Windows in a VM, because I offloaded that to my home server anyway and RDP into it when needed.
The 5800X is a 105W-TDP part; the M1X is claimed to be 45W TDP.
If you scale by TDP, the M1X is over 2x faster in the multicore bench:
M1X: 321 score/watt
5800X: 146 score/watt
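A quick sanity check, assuming those figures are a multicore score divided by TDP, recovers the underlying numbers and the ratio (note the 5800X's absolute score would actually be slightly higher):

    M1X:   321 score/W × 45 W  ≈ 14,400 multicore
    5800X: 146 score/W × 105 W ≈ 15,300 multicore
    Perf/W ratio: 321 / 146 ≈ 2.2×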
For example: DOS wasn't technically better than anything else; it was just the one that was being aggressive about adoption.
Or the fact that Sony was really in a position to OWN portable music in perpetuity, but went to sleep. Apple wasn't asleep, and didn't have outsized anxiety about digital music, so the iPod entered a market without a dominant "Digital Walkman".
The developer preview Minis they sold with an iPad Pro CPU were already well past midrange Intel desktop chips.
Intel started in an era when you did have to build your own fab. More importantly, even as others were going fabless, Intel continued to use its superiority in manufacturing as an additional advantage over their competition, who had to drop out of the fab business, and against the merchant foundries too.
But now Intel has fallen behind on both sides of the divide. I’m not sure how they will dig out.
TSMC is a slave to Dutch ASML, which is the only company on the planet that can make the EUV lithography machines TSMC uses to produce its latest chips.
Apple can take a few different paths to this “M1X” release. They can
A. Release in Spring. Use the current CPU & node generation, and add cores.
B. Release in Fall. Use the now-old (2020) CPU & old node, and add cores.
C. Release in Fall. Use the now-new (2021) CPU & new node generation, and add cores.
Will be curious to see which path they take.
I'd bet on Option B. This would be similar to what's currently going on with the iPad Pro vs. the iPad Air.
They aren't doing radical redesigns, so ramping up is easy and fast.
They've pulled an Osborne Effect on their Intel devices by announcing they're transitioning away in just a couple of years. Nobody wants to buy a brand-new second-class product.
Finally, launching their new laptops this fall means they are half-way through their transition time, but just getting started with their most powerful segment.
I don't know how big this group is as a percentage of Apple customers, but in an absolute sense it's significant -- several $B worth, anyway.
Also, there are people who won't buy an M1 because they're hearing (probably correctly, though I haven't had any problems) that not everything works right yet. If my mother needed to replace her MacBook Air today, I'd get her an Intel one. I expect her next machine will be Mx, though.
I suspect Apple will continue selling Intel-based machines into 2022, and perhaps longer. I doubt they will update them though.
Will also be curious to see whether the chip in these new MacBook Pros ("M1X") is exactly the same as what's used in the new iPad Pro.
People like to hate on the OS, but I find it super beautiful and easy to use.
I don't know how to describe it, but I feel good when I use my Mac.
In contrast, every time I have to boot Windows I feel like I'm being oppressed by some corporation; I feel closed in, and the whole experience is boring. Windows needs a serious REBOOT, not a reskin: a whole reboot, including deleting NTFS.
Apple just doesn't let you do what you want, the way you want to do it. Any limitations that Linux has are purely technical, while Apple clearly limits users for profit. If you happen to be able to accept the limitations, great for you - I'd still feel dirty supporting a company that's clearly working towards a future where you have to pay them to put your own programs on your own devices.
Windows I just use for entertainment now but even for that set of functionality - I couldn't stand having to deal with a finicky Mac. My Mac Pro 2012 (which I bought used) won't even work with certain mice, keyboards or displays.
On my desktop I dual-boot Linux/Windows: Windows for games only, and Linux for when I want zero distractions and to focus on programming.
I wish I could nuke my Windows partition and have Linux/Mac side by side, but gaming on Linux is still a pain today...
Imagine drinking so much kool-aid that you literally don't like a filesystem for no reason other than Apple good Microsoft bad.
Is your use case something odd? I wouldn't expect to hit filesystem limits on a laptop anyway.
Compiling projects is, and it's slow with NTFS plus slow Windows process creation.
Intel hasn't been the leader for 5+ years now.
Comparing to Intel is like comparing to PPC MacBooks - meaningless.
The only way to get an M1 is to buy a Mac anyways.
Unless/until you see a mass exodus of people from x86-based OSes, the comparisons don't matter.
Sure, new Macs will be super fast, and that in itself will result in some conversions, but there are plenty of other factors that would keep people from switching to Mac.
When you buy a new laptop, you won't be choosing between a 2018 MacBook and a 2021 MacBook, but between a 2021 MacBook and offerings from Lenovo, Dell, Microsoft, and others.
If you are upgrading, OS inertia may prevent you from choosing a machine that doesn't run the OS you're currently on. So a lot of Windows users may not even consider a Mac, and vice versa.
So yes, you can compare to the current gen, because that's what you'd be choosing between.
I suspect you will be able to for another year or so. See comment from me elsewhere on this thread for my reasoning.
And as a friend keeps reminding me, this is their cheapest entry level processor :)
aarch64 allows some tricks that are much harder to pull off on amd64.
The single-core benchmarks I saw put the M1 against CPUs with amd64 cores running SMT (which is fair: SMT extracts about 20% of extra parallelism from an amd64 workload, something the M1 achieves on a single thread with a humongous reorder buffer), with the AMD (and latest Intel) parts showing a slight advantage.
They do so, however, at a much higher TDP which, if my own computers (all but the server under the desk) are any indication, will cause the x86s to throttle down within a couple of seconds while the M1 continues at full steam.
They also have a lot more cores.
Per high-performance core, M1 Macs are tied or less efficient, depending on the workload.
That's not what the benchmarks I see say. AMD's chips with 8 amd64 cores beat the M1, which has 4 big cores and 4 low-power ones. In some benchmarks the x86 laptop CPUs record higher performance, but at a higher clock and a higher TDP. I don't have IPC numbers for either, but with its many execution ports, huge reorder buffer, and less strict memory consistency model, I suspect IPC is higher on the M1.
When you are doing serious work, the M1 is really a four core CPU and nothing else.
M1 cores have higher IPC, yes. They are also bigger and clocked lower. Larger reorder buffers and more flexible memory models mean bigger cores, which mean slower clock speeds at the same power consumption.
The M1 also doesn't have SMT, which is an easy +20% IPC.
If you measure performance per core and power consumption per core (talking only about the high-performance cores, of course), then last-gen x86 laptop CPUs are right there, despite their higher peak performance.
The performance of the M1 is remarkable, but the real killer feature is they managed to push that much performance while reducing power consumption.
I'd rather have a chip that scales to high performance when plugged in and lower GPU performance when I'm on, say, an airplane or in a cafe, than one whose GPU performance is half a decade behind.
Apple is selling super quiet hypermobile performance here, I see them as occupying a niche with no real competition.
The best argument you could make for the MacBook in this regard is for developers who might be working as nomads, and musicians who often lug their machines to their gigs.
But for someone doing heavy 3D, video, or gaming? I just don't see it. The M1 can't even run Tomb Raider above 30 fps. It's a two-year-old game, and isn't even that impressive. A GTX 1080 will yield 2x-3x that performance: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9...
I'm just saying, they've got a long way to go before they capture the hardcore gamer or 3D-rendering market.
I know a videographer who worked for news wire services who did just this and did his job on a MacBook sometimes on site. Helped him deliver news content faster. I don't think video editing on a MBP directly is as rare as you might assume.
No doubt Macs haven't turned into ideal gaming machines, but I've gamed on ultraportables with a lot less GPU than this; I've just never played a game like Tomb Raider on them.
The answer is no. 26.5W is only for the CPU. If you also need to use GPU then it throttles or consumes more.
Compare it to the previous (Intel) generation, which uses 20W idle and 122W under load while not matching the M1 in either CPU or GPU performance.
That being said, RAM and storage power draw are pretty negligible: LPDDR4/X uses around 0.2-0.5W, and storage around 0.05-0.1W.
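So a rough whole-package budget, just adding up those figures: 26.5 W (CPU) + ~0.5 W (RAM) + ~0.1 W (storage) ≈ 27 W, before the GPU enters the picture.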
There is also no reason to compare to Intel. The competition for Macs isn't older Macs; newer Macs are always going to be better than older Macs. The competition is what they could have been had they gone with AMD, and other computers of similar form factors.
In both cases, you're comparing to AMD, not Intel.
Comparing it to an M1 will yield the same results. The lowest TDP I could find was 65W, for a Ryzen 3, which is still a long way from the M1 in both power consumption and performance.
Compare the 5900HS with the M1: it's a 35W SoC, just like the M1, and much, much faster than any Ryzen 3.
This holds for SoCs or for systems that manage total power equally.