
Oracle Chrome, Ha! I wouldn't even be surprised if that's exactly how it plays out.


I'm sure Meta is doing some math on what they can afford.


God help us, haha!


Fast inverse square root is now part of the public domain.

Also, even if this weren't the case, you can't sue for damages on behalf of other people (they'd need to bring their own suit).
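For reference, the technique in question is tiny. A minimal sketch in C of the bit-level trick (not the verbatim Quake III source; memcpy replaces the original's pointer cast to keep the type punning well-defined, and 0x5f3759df is the well-known magic constant):

    #include <stdint.h>
    #include <string.h>

    /* Sketch of the fast inverse square root technique popularized by
       Quake III Arena. Approximates 1/sqrt(x) for positive floats. */
    float fast_rsqrt(float x) {
        uint32_t i;
        float y = x;
        memcpy(&i, &y, sizeof i);        /* reinterpret float bits as an integer */
        i = 0x5f3759df - (i >> 1);       /* magic constant gives an initial guess */
        memcpy(&y, &i, sizeof y);
        y *= 1.5f - (0.5f * x * y * y);  /* one Newton-Raphson refinement step */
        return y;
    }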


Is the particular implementation that the model spits out 70+ years old?


[deleted]


But Copilot distributed it (allegedly) without complying with the GPL license (which requires any distribution to be accompanied by the license), so it would still be an instance of copyright infringement. https://x.com/StefanKarpinski/status/1410971061181681674


Has it really already been 70 years since John Carmack died?


Ah, you're right. I was wrong to say "public domain".

It would be more correct to say Quake III Arena was released to the public as free software under the GPLv2 license.


There is a large gap between public domain and GPL. For starters, if Copilot is emitting GPL code for closed-source projects... that's copyright infringement.


That would be license infringement, not copyright infringement.


Copyright infringement is emitting the code. The license gives you permission to emit the code, under certain conditions. If you don't meet the conditions, it's still copyright infringement like before.


No.

Copyright infringement could be emitting the code in a manner that exceeds fair use.

The license gives you permission to utilize the code in a certain way. If Copilot gives you GPLed code that you then put into your closed source project, you have infringed the license, not Copilot.

> If you don't meet the conditions, it's still copyright infringement like before.

Licensing and copyright are two separate things. Neither has anything to do with the other. You can be in compliance with copyright, but out of license compliance, you can be the reverse. But nothing about copyright infringement here is tied to licensing.

To be clear: I am a person who trashed his Reddit account when they said they were going to license that text for training (trashed in the sense of "ran a script that scrubbed each of my comments first with nonsense edits, then deleted them"). I am a photographer who has significant concerns with training other models on people's creative output. I have similar concerns about Copilot.

But confusing licensing and copyright here only muddies waters.


Without adhering to the conditions of the GPL you have no license to redistribute the code and are therefore infringing the copyright of the author.


Apparently, the court disagrees with you, and doesn't find "emitting" the code a copyright infringement.

It'd be a long bow to draw to say that what is akin to a search result of a snippet of code is "redistributing a software package".


For enterprise, Google has something called Vertex AI which has a strict privacy policy but you have to pay just like any other typical cloud service.

Presumably Gemini 1.5 will be available there soon.


What do you mean?

If you've ever used a VR headset you'll know the weight is extremely important when it comes to comfort. This isn't just a random marketing metric.


"We designed [the Quest 3] to weigh 120 grams less" implies that the AVP is the benchmark by which other headsets are measured. It's funny to say you "designed it to weigh less" than a product that wasn't even unveiled yet.


A lot of patent trolls are very small companies though


It would have to be managed by tracking the number of active patents. You get 100 active patents tax-free. Over that, you have to pay an annual fee. This allows independent inventors to operate as the system intended while clamping down on NPEs.
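A minimal sketch of how such a schedule could work (the fee amount is hypothetical; the comment only specifies the 100-patent tax-free allowance):

    #include <stdio.h>

    /* Hypothetical tiered schedule: the first 100 active patents are
       free; each one beyond that pays a flat annual fee. */
    double annual_patent_fee(int active_patents, double fee_per_patent) {
        int over = active_patents - 100;
        return over > 0 ? over * fee_per_patent : 0.0;
    }

    int main(void) {
        printf("%.0f\n", annual_patent_fee(80, 500.0));   /* 0: independent inventor */
        printf("%.0f\n", annual_patent_fee(2500, 500.0)); /* 1200000: NPE-scale portfolio */
        return 0;
    }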


I got something totally different when I asked: https://g.co/bard/share/d5830c43d539


Great, nothing like software that works sometimes.


Still, Bard impresses with the specification of the 1973 engine in GP's example.


Gatik has yet to remove safety drivers as I understand it.


Eh, it depends.

I don't know if this is the case, but if these images were released to press & public as part of a media resource pack with a permissive license used to market the movie (which I believe is commonly done in this industry), I'd have a hard time empathizing with the viewpoint that Midjourney is doing something wicked by including it in their training data.


https://www.gadgets360.com/entertainment/news/dune-2020-firs...

You are right, the regurgitation was indeed produced by the studio for promotional purposes. But notice how the image is rendered at the link: with credit and copyright. And specifically, for promotional purposes. And while I don't presume to know the specifics of the licensing of that image, I wouldn't either assume that this use is licensed. Especially without copyright and credit.


Other ridehail products like Uber or Lyft have 100% human intervention all the time. I think that's what the parent comment is referring to.


In addition to the reduced memory bandwidth, the M3 Pro also loses 2 performance cores for only 2 more efficiency cores.

M2 Pro: 8 performance cores + 4 efficiency cores.

M3 Pro: 6 performance cores + 6 efficiency cores.

Not a great trade... I'm not sure the M3 Pro can be considered an upgrade.


Depends. Is it faster? Then it's an upgrade.

Has the CPU industry really managed to pull off its attempted BS coup that more cores always === better?

I thought we'd learned our lesson with the silly MHz myth already?


I guess we'll have to wait for benchmarks but I did find this interesting:

Apple's PR release for the M2 Pro: "up to 20 percent greater performance over M1 Pro"

Apple's announcement for the M3 Pro: "up to 20 percent faster than M1 Pro" (they didn't bother to compare it to the M2 Pro)


Sure, that's the title, but at least in this PR they immediately show a graph with a comparison to both.

Presumably it makes more marketing sense to compare to the M1 family up front because most people that bought an M2 last year are probably not going to be upgrading to M3. They are speaking to the people most likely to upgrade.


FWIW, I can't remember the last time I saw a company go back more than a generation in their own comparison. Apple is saying as much as they're not saying here. M2 -> M3 may not be a compelling upgrade story.


The vast majority of Mac users go years between upgrades. For any other vendor it might seem weird to show several comparisons going back multiple generations (M1 and x86), but for the macOS ecosystem it makes perfect sense since only a very tiny slice of M2 users will be upgrading.


And what makes you think Windows users update their devices every single generation?


Windows has distinct groups: the people who buy whatever costs $700 at Costco every 10 years (or when it breaks) don't care, but there's also a vocal enthusiast community who do upgrade frequently. That group gets more attention since it's a profitable niche and gaming generates a lot of revenue.


I used to buy a $700 Windows laptop every 18 months in the 2000s. Then I got fed up with them just falling apart and switched to MacBooks. My 2013 purchase is still alive and being used by the kids.


In the 2000s, I went through a wide variety of PC laptops (Lenovo, Toshiba, Dell, Alienware, Sony, etc.) all within the range of $1200-$6500 and they all died within 3 years (except for the cheapest one which was a Lenovo with Linux). Some died within a year.

When my first MacBook lasted for more than 3 or 4 years, I was surprised to be upgrading before it died. I went through many upgrades with almost zero issues (one HDD failure, one battery failure). I still have a 2012 MacBook Pro that I've since installed Linux on.

When I bought the first Touch Bar MacBook Pro (late 2016?) I spent around $6k maxing out the options, and I was surprised at how totally trash it was. Hardware QC issues were shocking: particles under the screen from manufacturing, keys stuck within the first hour of usage, external monitor issues, Touch Bar issues...

I haven't bought a laptop since.


Did you buy a MacBook for $700? That was a pretty low price back then, which meant you were buying devices made to a price. Buying a MacBook is one solution; another would have been to spend more money on a higher-quality Wintel system.


No, it was around $1100 IIRC, maybe as much as $1300.


Right, so when you spend twice as much you wind up with a better device. I think this might be only tangentially related to the fact that it was an Apple product; rather, you weren't purchasing the cheapest available device.


Ten years ago Apple was by far the highest quality laptop manufacturer. There was essentially no other option back in the early 2010s. Even now laptops with a "retina" display are not always easy to find for other manufacturers. In retrospect, that was probably the killer feature which induced me to switch.


Yeah, the quality of PC laptops has improved but that really just means you can get closer to equivalent quality at equivalent pricing. I've heard people claim to have saved a ton but every single time I used one there was some noticeable quality decrease, which I find kind of refreshing as a reminder that the market does actually work pretty well.


Did you treat the MB differently because you paid more? If so, that may have yielded longer life in addition to quality design, etc.


Not really. The difference in build quality was night and day; metal vs. plastic, keyboard that doesn't flex, etc.


Windows users buy whatever, from so many brands, that it doesn't matter how often they upgrade; they're likely not upgrading from the same vendor anyway (so a comparison against that vendor's older generations wouldn't be meaningful in the first place).


> And what makes you think Windows users update their devices every single generation?

They don't, but the difference is that Windows users generally don't know or care about processor generations. In contrast, it's common for Mac users to know they have an "old" Intel-based Pro, an M1 Air, etc., and to use that knowledge to help determine when it might be time to upgrade.

You can test this by asking Windows users what CPU they have. For the few who know and who have an Intel CPU, you can ask what their Brand Modifier¹ (i3/i5/i7) is. If they know that, you can ask for the 4- or 5-digit number following the Brand Modifier: its leading one or two digits are the Generation Indicator¹. I'd be surprised if more than 0.01% of Windows users know this.

¹ Intel's name
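For illustration, a small sketch of that decoding in C (a hypothetical helper; assumes the post-2013 Core naming scheme, where 4-digit SKUs carry a 1-digit generation and 5-digit SKUs a 2-digit one):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: extract the generation from a Core model
       string like "i7-8550U" (gen 8) or "i5-10400F" (gen 10). */
    int core_generation(const char *model) {
        const char *dash = strchr(model, '-');
        if (!dash) return -1;
        const char *sku = dash + 1;
        size_t n = strspn(sku, "0123456789");  /* count SKU digits */
        if (n == 4) return sku[0] - '0';
        if (n == 5) return (sku[0] - '0') * 10 + (sku[1] - '0');
        return -1;  /* unrecognized format */
    }

    int main(void) {
        printf("%d %d\n", core_generation("i7-8550U"),
                          core_generation("i5-10400F"));  /* prints "8 10" */
        return 0;
    }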


Intel's CPU naming strategy used to drive me nuts when trying to talk to anyone at work who knew "just enough to be dangerous." Why is X so slow on this machine, it's got a [6-year-old, dual-core] i5! It runs fine on my laptop and that's only a [1-year-old, quad-core] i3!


> it's common for Mac users to know they have an "old" Intel-based Pro, an M1 Air, etc., and to use that knowledge to help determine when it might be time to upgrade.

Not at all. I've worked with FANG developers with brand-new M1 MBPs who had no idea what "M1" meant until something broke.


Everything you said could apply to Nvidia GPUs as well.


Man, that's a whole lot of mental gymnastics to justify scummy benchmark practices from Apple.


How are they scummy? The M3 vs. M2 performance improvements they showed looked pretty modest.

My interpretation while watching the event is that this is a company persuading x86 holdouts to upgrade to Apple Silicon, and maybe some M1 users as well.


It's absolutely not, and that's fine. The video has statements that the machines are made to "last for years" and that they want to save natural resources by making long-lasting machines.

I’m currently at 4 to 5 years on laptops and 3 to 4 years on phones, and even then I hand them over to kids/friends/family who get a bit more use out of them.


> they want to save natural resources by making long-lasting machines.

Apple always comes from a position of strength. Again, they're saying as much as they're not saying.

Also, if they really cared about long-lasting machines: slotted RAM and flash, please. Thanks!


Huh. So they used to do this, but looking at the M series chips it seems like the architecture assumes the CPU-GPU-RAM are all on the same chip and hooked into each other, which enables zero copy. Someone more well versed in hardware could explain if this is even possible.

Expandable internal storage would be nice, yeah. But I get the sealed, very tightly packed chassis they’re going for.


> get the sealed, very tightly packed chassis they’re going for

The Dell XPS 17 is only 0.1 inch thicker yet has fully replaceable RAM and two(!) M.2 slots. I'm pretty sure what Apple is going for is maximizing profit margins over anything else...


I have an XPS 15. And while I liked that I could bring my own SSD and RAM, the build quality is nowhere near a MacBook Pro... like not even in the same galaxy. I had to have it serviced multiple times within the first few weeks. It had to be sent to Texas, and when it returned, one WiFi antenna wouldn't plug into the card, and the light on the front was permanently broken. I could have demanded Dell fix it - and I'd have been even more weeks without my main work laptop. So, by pure numbers/specs? Sure. By real-world quality, no way would I favor Dell.


The issue is often comparing apples (heh) to oranges.

I understand the desire for slotted RAM, but the major limiting factor for nearly 10 years was CPU support for more than 16G of RAM. I had 16G of RAM in 2011, and it was only in 2019 that Intel's 9th-gen laptop CPUs started supporting more.

The Dell XPS 17 itself has so many issues that if it were a MacBook people would be chomping at the bit, including not having reliable suspend and memory issues causing BSODs. Reliability of these devices, at least when it comes to memory, might actually be worse and cause a shorter lifespan than if the RAM had been soldered.

Of course it always feels good to buy an underspecced machine and upgrade it a year later, which is what we're trading off.

But it's interesting that we don't seem to have taken issue with BGA CPU mounts in laptops but we did for memory. I think this might be because Apple was one of the first to do it, and we feel a certain way when Apple limits us but not when other companies do.


There's a lot of flat-out wrong information in this post. For one, even the low-power (U-series) Intel laptop CPUs have supported 32GB+ of memory since at least the 6th generation[1]. Many machines based on these CPUs unofficially support more than that. I have a Thinkpad with an i7-8550U and 64GB of DDR4, and it runs great.

On top of that, the higher-power laptop SKUs have supported 64GB or more since that time as well.

Secondly, it’s silly to claim that having RAM slots somehow makes a computer inherently more unstable. Typically these types of issues are the result of the manufacturer of the machine having bugs in the BIOS/EFI implementation, which are exacerbated by certain brands/types of memory. If you don’t want to mess around with figuring that stuff out, most manufacturers publish a list of officially-tested RAM modules which are not always the cheapest in absolute terms, but are always night-and-day cheaper than Apple’s ridiculous memory pricing.

[1] https://www.intel.com/content/www/us/en/products/sku/88190/i...


Sorry, you're entirely mistaken; there was no business laptop you could reasonably buy with more than 16G of RAM. I know because I had to buy a high-end workstation laptop (Dell Precision 5520 FWIW), since no other laptop supported more than 16G of RAM in a thin chassis.

No Dell Latitude, Elitebook, Thinkpad X/T-series or even Fujitsu lifebook supported a CPU that was permitting greater than 16GiB of memory.

I know this because it was something I was looking at intently at the time and was very happy when the restrictions were lifted for commercially viable laptop SKUs.

Citing that something exists presupposes availability and functionality. No sane person is going to be rocking into the room with a Precision 7520 and calling it portable. The thing could be used as a weapon and not much else if you had no power source for more than 2 hrs.

Also, socketed anything definitely reduces material reliability. I ship desktop PCs internationally pretty often, and the movement of shipping unseats components quite easily even with good packing.

I'm talking as if I'm against socketed components; I'm not. But don't pretend there are no downsides and infinite upgradability as the upside; that's disingenuous. In my experience there are some minor reliability issues (the XPS 17 being an exceptional case, one I was using to illustrate that we sometimes cherry-pick what one manufacturer is doing in the belief that there were no trade-offs to get there) and some hardware-side limitations that cap your upgrade potential even when nothing is soldered.


> Sorry, you're entirely mistaken; there was no business laptop you could reasonably buy with more than 16G of RAM.

> No Dell Latitude, Elitebook, Thinkpad X/T-series or even Fujitsu lifebook supported a CPU that was permitting greater than 16GiB of memory.

Here are the Lenovo PSRef specs for the Thinkpad T470, which clearly states 32GB as the officially-supported maximum, using a 6th or 7th gen CPU:

https://psref.lenovo.com/syspool/Sys/PDF/ThinkPad/ThinkPad_T...

This is not a behemoth of a laptop; I'm writing this on a T480 right now, which supports 32GB officially and 64GB unofficially, and it weighs 4lbs with the high-capacity battery (the same as the T470).

I can't tell if you're trolling or what, but if you're serious, you clearly didn't look hard enough.

Edit: since you mentioned Latitudes, Elitebooks, and Fujitsu lifebooks:

- Dell Latitude 7480 (6th gen CPUs) officially supports 32GB: https://www.dell.com/support/manuals/en-us/latitude-14-7480-...

- HP Elitebook 840 G3 (6th gen CPUs) officially supports 32GB: https://support.hp.com/us-en/document/c05259054

- For Lifebooks, I couldn't find an older one that supported 32GB, but this U937 uses 7th gen CPUs, and has 4GB soldered and one DIMM slot which supports up to 16GB. This is a total of 20GB, again, breaking the 16GB barrier: https://www.fujitsu.com/tw/Images/ds-LIFEBOOK%20U937.pdf

I believe these are all 14"-class laptops that weigh under 4 pounds.


One more thought: you might be getting confused here with the LPDDR3 limitation, which was a legit thing that existed until the timeframe you're thinking of.

Any laptop which used LPDDR3 (soldered) typically maxed out at 16GB, but as far as I'm aware, this was due to capacity limitations of the RAM chips, not anything to do with the CPUs. For example, the Lenovo X1 Carbon had a 16GB upper limit for a while due to this. I believe the 15" MacBook Pro had the same limitation until moving to DDR4. But this is entirely the result of a design decision on the part of the laptop manufacturer, not the CPU, and as I've shown there were plenty of laptops out there in the ~2014-2016 timeframe which supported 32GB or more.


Intel actually has this documented all on one page: https://www.intel.com/content/www/us/en/support/articles/000...

DDR4 support was introduced with the 6th gen Core (except Core m) in 2016, LPDDR4 support didn't show up until (half of) the 10th gen lineup in 2019. It's just another aspect of their post-Skylake disaster, wherein they kept shipping the same stuff under new names for years on end before finally getting 10nm usable enough for some laptop processors, then a few years later getting it working well enough for desktop processors. In the meantime, they spent years not even trying to design a new memory PHY for the 14nm process that actually worked.


Yeah, this link is helpful, but IMHO doesn’t actually call out the specific problem I was referring to, which is that only laptops that used LPDDR3 had the 16GB limitation. If the laptop used regular DDR3, or DDR4, it could handle 32/64GB. The table lumps everything together per processor model/generation.


They haven't made slotted RAM or storage on their MacBooks since 2012 (the Retina MacBooks removed the slotted RAM AFAIK). It might save on thickness, but I'm not buying the slim-chassis argument being the only reason, since they happily made their devices thicker for the M-series CPUs.


> It might save on thickness, but I'm not buying the slim-chassis argument being the only reason

Soldered memory allows higher bus frequencies much, much more easily. From a high-frequency perspective, the slots are a nightmare.


It's not soldered. It used to be, but ever since the M1, it's in-CPU. The RAM is actually part of the CPU die.

Needless to say it has batshit insane implications for memory bandwidth.

I've got an M1, and the load time for apps is absolutely fucking insane compared to my iMac; there's at least one AAA game whose loading time dropped from about 5 minutes on my quad-core Intel to 5 seconds on my Mac Studio.

There's just a shitload of text-processing and compiling going on any time a large game gets launched. It's been incredibly good for compiling C++ and Node apps, as well.


The RAM is not on-die, and 5 min to 5 sec is obviously due to other things, if legit.


Sounds like the iMac had spinning hard disks rather than SSD storage.


Yup. I’ve been looking at the Framework laptop, and it’s barely any thicker than the current MacBook Pro.


I have no excuse for flash, but memory can't really be slotted anymore since SODIMM is crap. High hopes for CAMM making its way into every other machine in 2024!


Given that there is a legally mandated 2-year warranty period at least in Europe, I would be surprised if any laptops weren’t made to “last for years”.

The problem with Apple, however, is that their hardware will long outlive their software support. So if they really want to save natural resources by making long-lasting machines, they should put much more effort into sustained software support.


Yes, my 2010 MacBook Pro is still going strong.

But drivers are only available for Windows 7, and macOS High Sierra was the last supported version.

Luckily Linux still works great.


> I can't remember the last time I saw a company go back more than a generation in their own comparison

Apple likes doing that quite frequently while dumping their "up to X% better" stats on you for minutes.


Nvidia did it when they released the RTX 3080/3090, because the RTX 2000 series was kind of a dud upgrade from the GTX 1060 and 1080 Ti.


Apple always games comparisons like this for their conferences, though. The Intel era was even worse for this, IIRC.


In the Intel era there wasn't much to game; they were using the same chips as all the PC competitors. The PowerPC era, on the other hand…


The majority of MacBooks out there are still Intel-based. This presentation was mostly aimed at them and M1 owners.


Is it a problem, though? The vast majority of people skip generations, and for them the relevant reference point is what they have, which is going to be hardware from a couple of generations ago. M2 -> M3 does not have to be compelling: the people with M2 devices are a tiny fraction of the market anyway.

I find it interesting how people respond to this. On one side, it’s marketing so it should be taken critically. OTOH, if they stress the improvements over the last generation, people say they create artificial demand and things about sheeple; if they compare to generations before people say that it’s deceptive and that they lost their edge. It seems that some vocal people are going to complain regardless.


Given how strongly they emphasized the performance over the Intel base (who have now had their machines for 4 years, are likely to replace them soon, and may be wondering whether to stay with Apple or switch over to PCs), it is pretty obvious that they also want to target that demographic specifically.


That’s not what it says. Actual quote:

> The 12-core CPU design has six performance cores and six efficiency cores, offering single-threaded performance that is up to 30 percent faster than M1 Pro.


OK, so then the M3 Pro is up to 1.3/1.2 ≈ 8% faster than the M2 Pro? I can see why they wouldn't use that for marketing.


Depends who they are marketing to I think is the point. If the biggest group of potential buyers are not M2 users, then it makes sense not to market to them directly with these stats.

I've got an M1 Max 64GB and I'm not even tempted to upgrade yet, maybe they'll still be comparing to M1 when the M10 comes out though.


I'm also far from replacing my M1. But if someone with an older-generation Intel Mac is considering upgrading, the marketing is off as well.


I was referring to the graphic they showed during the announcement that verbatim said the CPU was "up to 20% faster than M1 Pro".

https://images.macrumors.com/t/wMtonfH5PZT9yjQhYNv0uHbpIlM=/...


Plausibly they thought the market is saturated with M1s and targeted this to entice M1 users to upgrade.


> Depends. Is it faster?

The devil tends to be in the details; more precisely, in the benchmark details. I think Apple provided none beyond the marketing blurb. Meanwhile, embarrassingly parallel applications do benefit from having more performant cores.


Heh, I recall seeing many posts arguing against benchmarks when all Macs equipped with an M2/8GB/256GB SSD scored much, much lower than the M1/8GB/256GB SSD. People said the synthetic benchmarks were not representative of real world use and you'd never notice the difference. 'Twas a battle of the optimists, pessimists, and realists. In reality, 'twas just Apple cutting costs in their newer product.


> Heh, I recall seeing many posts arguing against benchmarks (...)

It's one thing to argue that some real-world data might not be representative all on itself.

It's an entirely different thing to present no proof at all, and just claim "trust me, bro" on marketing brochures.


Oh, absolutely; I can't wait to see the benchmarks. Per the (non-numerical) benchmarks in the video, though, it is faster. So until other evidence presents itself, that's what we have to go on.


> Has the CPU industry really managed to pull off its attempted BS coup that more cores always === better?

I thought this at first, then I realized the cost-performance benefit gained from adding more cores often outweighs just improving the performance of single cores, even in gaming. I think this is what led AMD to create their Ryzen 9 line of CPUs with 12 cores in 2019.

That being said, I abhor the deceptive marketing which says 50% more performance when in reality it's at most 50% more performance on perfectly parallel tasks, which is not the general performance the consumer expects.



Few game devs bother optimizing games to take advantage of multiple cores


I find that frustrating with how Intel markets its desktop CPUs. Often the performance enhancements I find amount to directly turning off efficiency cores...


Faster than what? M1 Pro? Just barely.


The reference should be the M2 Pro.


I suspect it's about equal or perhaps even slower.


Based on what? The event video says it's faster.


M2 Pro was about 20-25% faster than M1 Pro, M3 Pro quotes a similar number. It has faster cores but a weaker distribution of them. Seems like a wash, but we'll see exactly how close when benchmarks are out.


2.5x is "just barely"? lol k.


> 2.5x is "just barely"? lol k.

That's only rendering speed, and M3 Max vs M1 Max (not Pro). M3 Pro is only 30 percent faster:

> The 12-core CPU design has six performance cores and six efficiency cores, offering single-threaded performance that is up to 30 percent faster than M1 Pro.


20%


Let me re-write your post with the opposite view. Both are unconvincing.

<< Depends. Is it faster? Then it's an upgrade. Has the CPU industry really managed to pull off its attempted BS coup that more MHz always === better?

I thought we'd learned our lesson with the silly cores myth already? >>


I think you're misreading the comment you're replying to. Both "more cores is always better" and "more MHz is always better" are myths.


Yup, exactly what I was saying.


Yes, but the number of cores in similar CPUs does provide a good comparison. For example, with the base M2 Pro at 6 P cores and the base M3 Pro at 5 P cores, one would want ~20% faster cores to compensate for the loss of one core in parallel-processing scenarios where things scale well. I don't think M3 brings that. I'm waiting for tests to understand what the new M3s are better at (probably battery life).


That's... the same view, just applied to a different metric. Both would be correct.

Your reading comprehension needs work; no wonder you're unconvinced when you don't even understand what is being said.


That makes less sense, because the MHz marketing came before the core-count marketing.

I agree with GP that we should rely on real measures like "is it faster", but maybe the goal of exchanging performance cores for efficiency cores was to decrease power consumption, not to be faster?


Probably a balance of both tbh, as it appears to be both faster AND around the same performance per watt.


The new efficiency cores are 30% faster than M2's, and the performance ones 20% faster, so let's do the math:

    M2: 8 + 4

    M3: 6*1.2 + 6*1.3 =
        7.2 + 7.8
That’s nearly double the M2’s efficiency cores, a little less on the performance ones.

They do say the system overall is up to 65% faster, and has lower power consumption at the same performance level.


You're not considering the difference in performance between the P and E cores. The math should be something more like:

  M2 Pro = 8*3 + 4 = 28 (the *3 representing that the performance cores contribute ~3x more to total system performance than the efficiency cores)

  M3 Pro = 6*3*1.15 + 6*1.3 ≈ 28 (Apple claims 15% more performance for the P cores, not 20%)

> They do say the system overall is up to 65% faster, and has lower power consumption at the same performance level.

They don't claim either of those things. They claim the performance is 20% faster than the M1 Pro. Interestingly, they made that exact same claim when they announced the M2 Pro.

Energy efficiency might be better, but I'm skeptical till I see tests. I suspect at least some of the performance gains on the P and E cores are driven by running at higher clock rates and less efficiently. That may end up being more significant to total energy consumption than the change in the mix of P/E cores. To put it another way, they have more E cores, but their new E cores may be less efficient due to higher clock speeds. Total energy efficiency could go down. We'll just have to wait and see, but given that Apple isn't claiming an increase in battery life for the M3 Pro products compared to their M2 Pro counterparts, I don't think we should expect an improvement.
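For what it's worth, here is that back-of-envelope model as a runnable sketch (the 3x P-vs-E weight and the 15%/30% per-core gains are this thread's assumptions, not measurements):

    #include <stdio.h>

    int main(void) {
        double p_w = 3.0, e_w = 1.0;  /* assumed relative core throughput */
        double m2_pro = 8 * p_w + 4 * e_w;                /* = 28.0 */
        double m3_pro = 6 * p_w * 1.15 + 6 * e_w * 1.30;  /* = 28.5 */
        printf("M2 Pro ~%.1f, M3 Pro ~%.1f (%+.1f%%)\n",
               m2_pro, m3_pro, (m3_pro / m2_pro - 1) * 100);
        return 0;
    }

On that model the two chips land within a couple of percent of each other.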


If you wanted to be even more accurate, you'd also have to take into account that most tasks are executed on the E cores, so having more of those, or faster ones, will have a much greater impact than any improvement on the P cores. It's impossible to estimate the impact like this, which is why Apple's performance claims[1] are based on real-world tests using common software for different workloads.

In summary, there is supposedly improvement in all areas, so the reduced P-core count doesn't seem to be a downgrade in any form, as the OP suggested.

[1] https://www.apple.com/nl/macbook-pro/


I wouldn't trust Apple's marketing on that if it's where you got those numbers from


E cores are ~30% faster and P about 15%. So the question would be how much the Es assist when Ps are maxed on each chip. In any other situation, more/better E cores should outperform and extend battery. I’m not saying that means you should want to spend the money.


I love Apple's E cores. It just sucks that the M3 Pro gains so few given the reduction in P cores.

Apple's E cores take up ~1/4 the die space of their P cores. If the M3 Pro had lost 2 performance cores but gained 4-8 efficiency cores, it'd be a much more reasonable trade.


I’m sure the difference is GPU.


I’d like to see that. Good point about die space.


Could you not resolve these questions with benchmarking?


Depends on what you consider an upgrade. As M3 cores perform better than M2 cores, I expect the M3 configuration to perform similarly to the M2 one, even though it trades performance cores for efficiency cores. Apple apparently believes its users value improved efficiency and longer-lasting battery more than further improved performance.


Functionally, how does this impact observed performance on heavy loads like code compilation or video manipulation? I suspect it's not much, and these are the low/mid-tier priced machines we are talking about.

If you bought a $2k M2 machine and traded it for a $2k M3 machine, you may gain better battery life with no concessions, except for benchmark measurements (that don't affect your daily work).


These are not low/mid tier machines when talking about "consumer-grade".


Yeah.

$2K-3K is what my 3090/7800x3D sff desktop cost (depending on whether you include the price of the TV/peripherals I already own).


Within the MacBook Pro lineup, they are objectively the low and mid-grade pricing tiers.


Indeed, but that's a bit of an oxymoron as any Macbook Pro is not a "low/mid-tier priced machine"


We all know what is meant by “low/mid-tier”. This is pointless pedantry. Next someone is going to come by with the throwaway comment complaint about how OpenAI isn’t “open”.


Fair enough; I was just arguing that even Mac users might not have the cash or the patience to commit to another machine.

We've seen the same with Nvidia's GPUs going from the 10 to the 20 series. If people don't perceive higher gains without compromises, they won't buy it.


Then why do they come with (low-end) consumer-level storage and memory capacity?


Different people have different needs. I certainly need a MacBook Pro for my work, but I use next to no storage. I've never purchased beyond the minimum storage for an Apple computer. I did, however, up the processor on my current MacBook Pro.

Minimum 8GB RAM is more universally egregious but I’m not going to sit here and justify my own exception whilst discounting the possibility that 8GB works for others.


The cost of adding an extra 8GB would be insignificant for Apple, though. The only reason they don't is to upsell higher-tier models.


It would make them less money. /thread

To be fair, while 8GB is annoying, I bought the M1 MacBook Air when it came out and it's remarkably resilient. I've only had it freeze a few times due to too little RAM.

I've also been using many different programs. I just have to be a tad mindful about closing tabs (especially Google tabs) and programs.


This makes going with the Mac Mini M2 Pro over the iMac M3 feel really compelling. The respective prices of these models are in fact the same, so if you happen to have a good monitor already... (Also, the iMac M3 curiously doesn't even have a Pro option.)

