That's like 80% of the comments in the thread. Unfortunately, HN is not the place for sane discussion. It's a place for people to vent their Intel hate.
HN has changed significantly. Some years ago I quoted The Last Psychiatrist and we had a discussion about the points. Lately I quoted TLP and was insulted and downvoted. People are not interested in learning new things, only in following whatever cult they believe in.
That's not a societal issue but a community issue. The issue resolves itself when the community devolves over time and gets shitty enough that the members who care about informative conversations leave for greener pastures in new communities, leaving the dross behind in their cult echo chamber. It happens to every social media platform, and it's happening to HN.
I don't know about the Apple ecosystem, but have you seen ANYTHING using the NPU on PC? I have not. I own an AMD laptop with an NPU (Ryzen 9 8945HS), and the NPU has never seen a single percentage point of utilization since the laptop was unboxed and put to use. And I actually have an interest in local AI, but all the stuff I use (like Ollama or ComfyUI) runs on the GPU. Even if they had support for the NPU (I don't think they do), I wouldn't run that stuff on the NPU, because it's just not competitive with the NVIDIA GPU that's also in my laptop.
To me, seeing Intel and AMD include this sort of useless thing is anger inducing. I am paying for this. I want every inch of that silicon to be useful, not a detrimental waste of space like the NPU.
Seeing "better NPU" in a sentence meant to market a CPU doesn't elicit positive emotions.
In the Windows world, the one thing that might end up using an NPU is also the thing most people do not want: Windows Recall. And that feature, for now, is exclusive to Qualcomm ARM PCs; current x86-64 NPU owners can't get it.
> seeing intel and AMD include this sort of useless thing is anger inducing. I am paying for this.
So don't pay for it. No one is making you. Wait for a model that doesn't have an NPU, or buy an older model that doesn't. It's not like it won't still be fast enough.
How many years from now? There isn't any high end CPU in laptops without those useless things now.
> buy an older model that doesn't
I don't think you've ever shopped for laptops, or you're lucky and live in a country that is particularly plentiful in PC choices. Looking for the specific combination of 32 GB of RAM, a 1 TB SSD, an AMD CPU (with Intel's current manufacturing woes I was not willing to gamble), and an NVIDIA GPU with a minimum of 8 GB of VRAM took far more effort than I am normally willing to spend on activities like shopping. And now you tell me "do all that while looking for a model that predates NPUs"?
Of course I could order online from god knows where, but I like buying from retailers known to honor their warranty, since there's always the possibility of getting a lemon, and I don't feel like wasting time shipping crap back myself when I could just exchange it in person.
Entirely up to you. Point is, vote with your money.
> There isn't any high end CPU in laptops without those useless things now.
Even if you don't limit yourself to processors from Intel/AMD?
> I don't think you've ever shopped for laptops,
I've purchased 6 in my life.
> Looking for the specific combination of having 32gb of ram, 1tb of SSD, an AMD CPU (with Intel's current manufacturing woes I was not willing to gamble), an NVIDIA GPU with a minimum of 8 gb of vram took far more efforts than I am normally to spend doing activities like shopping.
So be less picky, or find a laptop that lets you upgrade the parts.
> I don't feel like wasting time shipping crap myself when I could just exchange it in place if it happened.
Fine, but this is a compromise you are willing to make, just like paying for the NPU. That's my point.
Where are you getting better GPU benchmarks from? AFAIK there haven't been any public graphics benchmarks, and Intel didn't compare against Apple. Apple's GPUs have generally been class leading for integrated graphics. I'd be surprised if Intel improved dramatically here, as their iGPU has been quite anaemic until recently.
Intel's NPU is better, but as noted in a thread higher up, their average package wattage is a little over double (37 W vs 15 W) for a 20% performance claim.
The M3 is around 3.5 TeraFLOPS; Lunar Lake is 5.2-6.5 TeraFLOPS. I'm sure more detailed benchmarks will be coming soon, but realistically, there is no way to make up that gap.
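For reference, here is the raw arithmetic behind that claim, using only the peak-FP32 figures quoted above (a naive sketch; as noted below, peak TFLOPS alone is a poor proxy for real-world GPU performance):

```python
# Naive peak-FP32 comparison using the figures quoted in the thread.
# Peak TFLOPS ignores memory bandwidth, occupancy, drivers, precision mix, etc.
m3_tflops = 3.5                  # quoted M3 figure
lunar_lake_tflops = (5.2, 6.5)   # quoted Lunar Lake range

for ll in lunar_lake_tflops:
    lead = (ll / m3_tflops - 1) * 100
    print(f"{ll} TFLOPS is ~{lead:.0f}% above the M3's {m3_tflops} TFLOPS")
```

The low end of the quoted range is roughly the "50% faster" figure claimed later in the thread; the high end is closer to 85%.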
Apple here says the MacBook Pro has 18 TOPS (compared to Lunar Lake's 48 TOPS)... it's not really in the same league.
https://www.apple.com/macbook-pro/
You can't directly compare teraflops for GPU performance between different architectures unless you really only care about single-precision throughput, which is not a good metric. You actually need real-world graphics benchmarks to compare GPUs.
You also can't compare NPU TOPS without knowing the baseline data type. Apple quotes the M3 in FP16, whereas Intel uses INT8. You have to double the Apple number to get the raw data throughput (ignoring any other efficiencies for operations in different types).
It's ~36 vs 48, so closer to 33% more for 100% more power use (though it's impossible to measure just the NPU's share). The more comparable SoC for power use would be the M3 Pro.
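The normalization above can be sketched as a quick back-of-envelope calculation. The 2x FP16→INT8 scaling factor is the assumption stated in the comment, not a measured figure:

```python
# Back-of-envelope NPU TOPS comparison across vendor data types.
# Assumption from the thread: doubling an FP16 TOPS figure gives a rough
# INT8-equivalent number, since INT8 ops are typically 2x FP16 throughput.

def int8_equivalent_tops(tops: float, dtype: str) -> float:
    """Normalize a vendor TOPS figure to an INT8-equivalent value."""
    scale = {"int8": 1.0, "fp16": 2.0}  # assumed per-dtype scaling
    return tops * scale[dtype.lower()]

m3_npu = int8_equivalent_tops(18, "fp16")          # Apple quotes FP16
lunar_lake_npu = int8_equivalent_tops(48, "int8")  # Intel quotes INT8

print(f"M3 NPU: ~{m3_npu:.0f} INT8-equivalent TOPS")
print(f"Lunar Lake NPU: {lunar_lake_npu:.0f} TOPS")
print(f"Intel lead: ~{(lunar_lake_npu / m3_npu - 1) * 100:.0f}%")
```

That yields ~36 vs 48, i.e. roughly the 33% lead mentioned above rather than the headline 48-vs-18 gap.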
Gaming is essentially all done in FP32... so it's by far the best figure of merit (excluding issues with, say, RDNA3 dual issue, which is rarely achieved in practice).
You will see gaming benchmarks come out soon, and Lunar Lake will be about 50% faster than the M3. (A secondary issue, of course, is how few latest-gen games run on macOS....)
True, that's FP16, but it's not clear if M3's Neural Engine even supports INT8.
I'm sure M4 will make this much more competitive, but right now, Lunar Lake is overall a much more balanced architecture that most people would prefer, ceteris paribus....
Gaming is absolutely not all done in FP32. A lot of games actually target half precision, which is where most PowerVR-based GPUs pull ahead. The majority of shaders and buffers are better suited to half.
It also ignores things like occupancy and memory throughput, among many other aspects of a GPU.
I think a 50% delta for the GPU is very wishful thinking, given that even Intel is only claiming a 33% uplift versus Meteor Lake, which itself was behind the M3 line when compared at similar TDP.
Regarding the NPU, the M3 does support INT8. It’s just that between the M3 and M4 release, the rest of the industry started coalescing on INT8, hence the change in base type.
I expect the same will happen again now that NVIDIA are touting INT4 as their performance standard for marketing.
Intel Arc runs FP16 at 2:1 compared to FP32, and Battlemage on Lunar Lake is the same; XMX FP16 is actually 8:1. I don't think the M3's GPU has a better ratio.
Of course there are many other aspects, but given it's Intel's latest architecture, which has improved efficiency tremendously (see https://cdrdv2-public.intel.com/824434/2024_Intel_Tech%20Tou... ) it's pretty unlikely M3 has any fundamental advantage.
Do you have any reference showing Neural Engine in M3 supports INT8 (and at 2x FP16? Just curious.)
I’m not saying the M3 has a fundamental advantage. I’m saying that it’s unlikely to be as high a difference as is being stated in real world use. I don’t think a SOC at half the power budget is going to be magically more powerful.
Possibly Battlemage running at 100% will use more power than the M3's GPU running at 100%... it will take some detailed testing to track that. Plus, Lunar Lake can be set to different TDPs (and performance settings, on battery vs. connected to power), not to mention different "100%" GPU workloads.
At the end of the day, though, users vastly prefer a more powerful built-in GPU for the occasional gaming session... Intel is willing to pay for the transistors, while Apple reserves them for the M3 Pro instead.
It has tabs in Explorer!! That's it... the only good thing. The start menu still flakes out and fails to search your apps. It's literally useless garbage until it decides to start working again. Meanwhile, my flow is broken by navigating through the file system to start Visual Studio Code.