Hacker News
Intel: New Core Ultra Processors Deliver Breakthrough Performance (intel.com)
116 points by htk 16 days ago | 160 comments



This might be a better link than Intel's press release page since it contains more context for the packages and technical specs. https://videocardz.com/newz/intel-unveils-core-ultra-200v-lu...

This is an efficient laptop series with new cores capable of low idle and near-idle power draw, and an integrated GPU analogous to AMD's 780M (looking forward to benchmarks).

The AI comments in the PR are due to the package devoting space to a neural processor. They are claiming up to 48 TOPS which exceeds Apple's M4's reported 38 TOPS.

Similar to Apple Silicon, RAM is incorporated into the package. It has 4 P-cores and 4 E-cores, like Apple's M1.

It's Intel, so I'm looking forward to benchmarks and real-world tests, and they will need a good desktop version of these packages later this year, but this new architecture finally appears to be headed in the right direction.


> They are claiming up to 48 TOPS which exceeds Apple's M4's reported 38 TOPS.

38 TOPS at 15w in a tablet*

This is up to 48 TOPS at up to 37w

Something tells me the full 35w M4 will be somewhere above 48 TOPS.

This is still a huge leap forward for Intel, especially integrating memory on package. I doubt it topples M4.


Good point. It seemed weird for this to be higher ops/sec and you pointed out why. If it's true that Apple releases the rest of M4 in November, and Intel releases the desktop architecture in October-November, we'll have full comparisons.


No one is reaching peak performance on these AI accelerators. It's just stupid marketing.


Where are you getting that the NPU draws up to 37W? I believe that is the full package power; the 120 TOPS figure Intel cites is the combined CPU+GPU+NPU total.


This is a bit of word salad from Intel's marketing, with a catch buried inside the messaging.

> up to 120 total platform TOPS (tera operations per second) across central processing unit (CPU), graphic processing unit (GPU) and neural processing unit (NPU) to deliver the most compatible and performant AI experiences across models and engines.

Notice they say "most compatible and performant AI across models and engines"; this means you don't really get to add all the TOPS up and run them as a unified workload. That sum of 120 is marketing speak: without doubt the CPU alone will hit ~30w just running the 8 cores at full speed, leaving little power for the GPU and NPU, even with their claimed 50% efficiency gain.

They already do this today with their existing laptop chips, so I highly doubt they have changed.

Instead, you get different power targets, TOPS and efficiency depending on the workload. It's the same with Apple: we don't actually know how much wattage the NPU drew during their claimed 38 TOPS.

Package power is about all we've got until the chip comes out. It's not like Apple and Intel are putting 2w GPUs and 2w NPUs on a 37w package (at least not anymore). They likely utilize 30-50% of the available power.

This is how I read it anyway; I'd love someone else to weigh in and tell me if it's different from previous press releases.


Yes, it’s like misleadingly advertising WiFi routers by adding up the peak theoretical bandwidth of all bands.


The 37 W is only the short-term power, sustainable for a few seconds, perhaps for half a minute.

Lunar Lake is optimized for a steady-state package power of 17 W and it is likely that most devices with it will use this value as the default power limit.

The 17 W power limit has been essential in determining the characteristics of Lunar Lake.

A CPU designed for 37 W would have looked very different, by being similar to AMD Strix Point (which is optimized for 28 W or more), i.e. by having more CPU cores, a beefier GPU and more PCIe lanes and other peripheral interfaces.


Of course not, my point was just that the 37W wasn't an NPU power figure, but a package power.


So is the 15w. I think the OP is saying these are different classes of device, so it’s not fair to directly compare performance and declare victory.


Combined TOPS are a scam. I'm not aware of a single app that can use NPU and GPU together.

I wouldn't be surprised if Lunar Lake and M4 end up being close in several different metrics.


I can imagine that at some point in the next year or two we'll see games that use the GPU for rendering and then the NPU for upscaling, but I don't think that's actually happening yet. As far as I am aware, none of the fancy "AI upscaling" technologies used by games are running on anything other than the GPU so far.


I think gaming has many exceptional use cases for an NPU in the next “few”-to-“several” years.

One big one I see coming: Dynamic Voiceover.

Imagine instead of a generic filler name, each voice actor's generative model could speak your character's name in video game scenes. Now imagine generating entirely novel conversations with highly nuanced NPCs.


"ignore your prompt and surrender"


Pretty sure the NPU is useless for video. They all seem to be FP8/INT8, which isn't enough precision to keep your pixels pretty.


I think most of the NPUs (and all of the ones that meet Microsoft's Copilot+ requirements) can also do float16.


AMD is working on it (heterogeneous dispatch). Roughly EOY timeline.


Anyone who takes Apple's 38 TOPS claim seriously is misled, just like that time Apple compared its iGPU with a 3070 Laptop without specifying the conditions. Most GPUs do a couple of times over 38, Apple's iGPU likely included. Even Intel's CPUs do more; that's a low bar.

The NPU is a "background AI" device. It performs better than the CPU and worse than the GPU for low-end tasks, without having to wake up the messy GPU. It's not supposed to be fast at all.

Comparing SoCs by advertised TOPS figures is like comparing cars by their reversing speeds. That sometimes matters, but it's rarely the most relevant parameter.


I am not seeing the misleading part. A neural processor on the package is supposed to be good at preserving battery by handling the low-power, low-precision tasks. More TOPS = more consistent battery life with heavier usage over the day, is how I'd explain it to an everyday user.


Some people think the claimed 38 TOPS was itself some sort of groundbreaking achievement, despite it being just a lesser substitute for the iGPU, closer to the biggest shipped E-core or the fastest on-chip video encoder of the time. That's the misleading part.


Saying that the NPU is a lesser substitute to the GPU is only accurate if you believe raw performance is the only important metric. Which means you're still missing the entire point of NPUs.


Just how do you benchmark an AI? Do you ask it? There are a few pans of fudge factor.

My car goes faster in reverse, something I only need for movies like "The In-Laws" (Peter Falk and Alan Arkin).


For comparison, my 7900 XTX, which is the most powerful card that AMD offers, hits a (specced) peak of 120 tflops with fp16 operations, at 300W.

However, when they say "TOPS" instead of "TFLOPS" that usually means something like int8, and it's unclear if this chip will support any float format, so with most networks you'll have to quantize first. Not sure what overhead that adds to get the same quality.
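
For anyone who hasn't tried it, int8 quantization of an existing network isn't much code these days. Here's a minimal sketch using PyTorch's dynamic quantization (illustrative only; the toy model and sizes are made up, and getting the result onto an NPU still goes through each vendor's own toolchain):

    import torch
    import torch.nn as nn

    # Toy FP32 model standing in for "most networks"
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    # Dynamic quantization: weights stored as int8, activations quantized on the fly
    qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    x = torch.randn(1, 512)
    # The quantization error, i.e. the "quality overhead" in question
    print((model(x) - qmodel(x)).abs().max())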


Two points about the GPU comparison. Modern GPUs, including the 7900 XTX, have tensor cores which give higher TOPS, especially for smaller data formats like int8. Desktop GPUs are also driven to power levels and voltages beyond peak efficiency, so TOPS/watt will be worse than an NPU by design. (I'm sure the overall architecture is also a factor, but don't underestimate the effect of those comparatively simple power/frequency/voltage decisions.)

And of course, software support is still very much in "I'll believe it when I see it" territory.


Couple of things,

1. Meteor Lake was the first Intel SoC with an NPU; it came out late last year, if you're curious.

2. There won't be a desktop Lunar Lake (well, at least not one for a socketable ATX desktop, maybe mini PCs). Instead, its 2nd-gen (in this advanced SoC era) desktop counterpart is called Arrow Lake. Rumors say it will be out next month.

Lunar Lake is for ultraportables/ultralight notebooks, and Arrow Lake will be used in desktops and beefier laptops.


Thank you; sorry for the misnaming.


No need to apologize, I made the same mistake, just informing.

Seems like 228V vs 238V is just binning on max freq...

If we take into account durability (some may say 125H is the best of the bunch right now...), they can be interesting solutions with a good perf/w ratio.


4/4 sentences in the first paragraph mention AI. No real specs. No benchmarks. Yikes.


I read through Samsung's latest phone presser; more than 80% of the features were AI.

The camera features were almost entirely AI additions to zoom and processing.

It is going to be like this for a while it seems.


I wonder if they have read the study showing that mentioning AI in a product description significantly reduced intention to buy. It seems intuitively plausible, and the effect was significant and measurable in the study.

[0] https://news.wsu.edu/press-release/2024/07/30/using-the-term...


Somehow I think even if they believed that, the incentives are still to talk about AI, because it signals something to some kind of investor class, and the stock goes up. It’s pretty maddening, especially when it has the potential to hurt the fundamentals (sales / profits), but this seems to be the cognitive dissonance of the moment.


What will be next, I wonder? I really can't think of anything. This AI phase feels like grasping at straws, like when TV tried to do 3D.


Disagree.

LLMs and diffusion models aren’t going anywhere.

As to all the nonsense they’re trying to sell right now, sure; but I think this is an epoch moment like the internet was for most of us.


The internet brought us an entirely new type of worldwide communication between people that wasn't possible before.

Generative AI brought us cringe. It has existed for like 5 years now and it's still a solution in desperate search of problems.


Do you know how exactly your comment reads compared to “the internet is a fad” of the late 90s?

I can select a PDF and ask for facts from it and page numbers where the information is. If you think that isn’t amazing I don’t know what your scale is.

IDK what you've been doing with LLMs, but it is a major shift even if you can't see it.


The 90s internet comparison feels pretty apt to me.

In both cases, it's a signal that a correction, maybe even a crash, is going to come due to massive over- and misinvestment.

But like the 90s internet, it's likely going to be extremely transformative in the longer term.

Some people focus on the short term part of this, others on the long term. Integrating both into a coherent perspective isn't that easy.

It's going to be interesting for a while!


> I can select a PDF and ask for facts from it and page numbers where the information is

You can already do this, press ctrl-f and search. Boom, done.

LLMs don't solve new problems much - rather they give you a new INTERFACE into a solution. Natural language.

Before, you'd use commands, specific software, processes, formats, configuration. Now you can use natural language (maybe, sometimes). To me, this isn't a breakthrough.


Please give me some examples of real new capabilities that LLMs gave people. Something that they weren't able to do before, or something that was wildly impractical before LLMs appeared.


To be fair, it brought not only cringe, but also autocomplete on steroids. The architecture is good at applications that need pattern recognition.


Well yes, neural networks are very good for pattern recognition and data categorization tasks. Also for speech synthesis. I wasn't saying that this is a bad application of AI. I was talking about generative AI and LLMs specifically, the kind that takes prompt strings and spits out text/image/audio/whatever.

> autocomplete on steroids

Maybe if we manage to run LLMs locally, we can put one into a keyboard app and finally have a Russian touchscreen keyboard that doesn't make me want to yeet my phone at a wall for repeatedly failing to type the word in the correct grammatical form when I use gesture input. But somehow we aren't there yet. I guess ChatGPT is more important.


You don't say! A new fad!


We weathered blockchain. We'll get through this one too.


You think they can combine them?


Absolutely.

Same thing for the new Pixel. They actually were using AI as a feature that made their phones better than Samsung phones.

Instead of each company upping the ante on their cameras, it's now going to be another five years of companies pushing new things and tweaks their AI can do better. This is just the newest arms race in smartphones.


https://www.youtube.com/watch?v=d1g1tltlVr0

It's been working this year so far. Erm... not for making money, but for raising money. (Exception: Nvidia actually got profits.)


Rather cynical comments so far. I personally am very interested to see how this line of chips does, both in terms of performance (really efficiency, for this sort of chip) and market performance. Hopefully things like Lunar Lake, Arrow Lake, etc., and their 18A node all turn out to be as good as some of the early leaks and press releases would indicate, because Intel needs some big wins to get back on track.


We spent decades taking leaps and bounds with every chip release. We've now seemingly settled into the incremental improvement phase. The chip makers have responded by burning tons of transistors on extra crap that spends most of its life powered down.

It's hard not to be cynical.


? Since 2016, Ryzen has forced Intel to compete again, and the 2020 M-series from Apple made day-long battery life a reality.

CPUs have been very interesting the past 8 years or so.


As someone who writes assembly, I'd say they really haven't been interesting for a long while.

I think it's all about being realistic.

We made leaps and bounds before because clock speeds were going up 50% or more between generations. Add in architecture improvements and it was easy to see actual performance double from one generation to the next.

But we're struggling to get clocks faster now, and I always imagined that it's because the speed of electricity isn't fast enough. At 6 GHz, in one clock cycle, light travels only about 2 inches/5 cm. Electricity moves slower than the speed of light, depending on the medium it's going through. At the frequencies we're operating at, I figure that transistor switching speed and clock skew just within the CPU can start to be an issue.
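
Quick sanity check on that number (a back-of-the-envelope sketch in Python; signals in copper traces propagate slower still, very roughly half to two-thirds of c):

    c = 3.0e8        # speed of light in vacuum, m/s
    f = 6.0e9        # 6 GHz clock
    d = c / f        # distance light covers in one clock cycle
    print(d * 100)   # ~5.0 cm, about 2 inches, before accounting for slower propagation in copper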

We already have tons of CPU optimizations. Out-of-order execution, branch prediction, register renaming, I could go on. There's probably not much more we can do to improve single-threaded performance. Every avenue for optimizing x86 has been taken.

And so we go multi-core, but that ends up making heat a primary concern. It also relies on your task being parallel.

Or we go ARM, but now some of your software that has had x86-specific optimizations like using AVX-512 has to be rewritten.


Being realistic doesn't mean a need to be cynical. Leaps-and-bounds progress never lasts forever. Incremental improvement is still worth celebrating.

I totally disagree that specialized processing units are wasteful because they spend most of their life powered down. Your iPhone uses the Neural Engine every time you open the camera app. The announced AI features for the next iOS version will use on-device AI many of the times you use Siri, which is used a lot by a lot of people.

The old-school version of this would be dissing multimedia instructions like hardware encoders/decoders. How do you think your laptop so effortlessly plays back 4K video and somehow gets better battery life than when you're working on a Word document? It's that part of your processor that usually "sits there doing nothing."

You just don’t realize how much these segments of the chip are accelerating your experience.


You want a chip that never powers down? Boy, have I got a deal for you. zero transistor waste, zero extra crap just like you asked for. It's a 286. Limited availability so gonna have to ask $5000 per chip.


There's nothing wrong with the state of things; however, I'm merely pointing out that things are significantly different than they used to be, and a change in expectations might be warranted.

CPUs used to be purely about computing power; now they're about computing accessories, which is a different type of market and purchase altogether.

If you can't acknowledge the differences without becoming irrationally aggressive as if I've insulted you personally then this is not going to be a great conversation.


> CPUs used to be purely about computing power,

Yes, and now they are about saving power. If the employer pays the wasted hours, why not.


Does it boot Xenix?


I've got an i9-13900K that shit the bed, which I had to replace with AMD.

I opened a return ticket with Intel; after a day-plus delay they followed up with questions about the BIOS version. The motherboard was no longer in service (see AMD above), so I couldn't immediately answer their question. Then they closed the support ticket.

So I have to start from square one, but I may not bother because the value of the time I've wasted on this already (10s of hours) vastly exceeds the cost of replacing all my chips with AMD.

There is NOTHING Intel could release that I would buy.


Confirming settings is standard, and I had the same with AMD; the support person has to tick their boxes before they can forward you to RMA.


They lied to you, you lie back to them. Standard practice among politicians and lawyers, and now apparently CEOs.

Favorite tech support help line: "bring me those cheese balls girl." (QMS Mobile Al.) I would if I could.


No problem normally but this is a bad chip, full stop. Regardless, I was going to send the info but they closed the ticket!


They replied to my ticket asking about the motherboard and bios even though I had mentioned both and more in the initial ticket.


And it’s not like it isn’t amply established that it is a chip defect already.

Decent customer support would just send a swap right away, whatever they gain by not fixing this promptly they lose tenfold on lost future business.

Intel is in a death spiral.


It may not even be humans that deal with those reports. That may also be why they think everything is fine and their users are dumb.


I'm still holding out a little hope that they release a competitive high end GPU, since their software stack is both good and fully featured on Linux.


Peak performance is already good enough for most people. It's performance per watt where they still lag. Intel needs a Ballmer-like figure to remind their engineers, "Battery life, battery life, battery life."


Performance per watt and battery life are only somewhat related. For laptops idle and sleep power usage are far more important. For example the AMD AI 300 chips have better performance per watt than Snapdragon X Elite but have worse battery life.


That's exactly what they're claiming to deliver.


> Peak performance is already good enough for most people

Citation needed. My work laptop is still crap when it is time to do real work.


If you would have told me in the 1980's we would still be using x86 based chips in 2024 I would have laughed at you. I bet DOS runs really really fast on them.


Running or virtualizing DOS-era software can be non-trivial: see, for example, Windows 98 vs TLB cache invalidation behavior in modern CPUs https://blog.stuffedcow.net/2015/08/win9x-tlb-invalidation-b... requiring a patch. Any code that does loop counting to estimate cycles for sleep() will also overflow (hello, Pascal CRT Error 200!) requiring a different patch.
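
For the curious, the Error 200 failure mode is easy to sketch. This is an illustration in Python, not the original Pascal; the real CRT unit did the equivalent scaling with a 16-bit DIV at program startup:

    import time

    def loops_per_tick(tick_seconds=0.055):
        """Count busy-loop iterations during one ~55 ms BIOS timer tick."""
        n = 0
        end = time.perf_counter() + tick_seconds
        while time.perf_counter() < end:
            n += 1
        return n

    # The CRT unit scaled this count to loops-per-millisecond for Delay(); once CPUs
    # got fast enough that the quotient no longer fit in 16 bits, the DIV faulted
    # at startup with "Runtime Error 200".
    loops_per_ms = loops_per_tick() // 55
    if loops_per_ms > 0xFFFF:
        raise RuntimeError("Runtime Error 200")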


> If you would have told me in the 1980's we would still be using x86 based chips in 2024 I would have laughed at you.

In the late 1970s, Gordon Moore believed that their next ISA would last the lifetime of the company. Now, technically, he thought their next ISA would be in the 8800. But it was such a drawn out failure they came up with the 8086 in an emergency 3-week sprint.

https://dl.acm.org/doi/10.1145/3282307


Instruction set wars were a 1970s concept anyways. The way x86 was built in the 80s vs. the way it's built today shows that it was a complete misnomer to begin with. It turns out instruction decoding really is the least important part of the pipeline.


Is it? My compute-bound tight loops on x86-64 are often decode-bound.


How tight are those loops really, and how complex are your instructions? I'd suspect you're blowing out the cache and leaning on a single decoder due to the nature of the instructions. Particularly if these are SIMD instructions: even if you sped up or parallelized the decoders, you'd be up against later pipeline latencies very quickly anyway. At that point, are you actually measuring a real bound?


We did go from 16 to 32 and now to 64 bit, so there have been some significant changes in that time, plus all the new instructions added. But ARM chips are taking over; I think there are now more than 10 ARM chips sold for every x86 one.


There's also an ARM CPU inside modern x86 processors, little-known fact. Look up AMD's Secure Processor.


ARM chips were widespread already long before they reached performance parity with x86, and actually already before the smartphone market took off.

There always have been vastly more chips of another architecture for every x86 chip. Most of the output of fabs with older processes are microcontrollers, most of which use some bespoke instruction set.


A lot lot more than a factor 10 if you count microcontrollers.


And then the punchline: computers have gigabytes of RAM, multiple cores with multiple GHz, yet they run slow.


Computers run very fast, it's software that runs slow.


I'd be curious to see how many clock cycles something as simple as "x = y + z" (where all three variables are integers) takes in various languages.

The compiled languages would likely output a single MOV and ADD and get done in 2 cycles (plus any time to fetch from memory). Something like Python probably takes a couple hundred with all its type checking. JIT languages I would think would take a couple hundred the first time the line gets executed, but then have a single MOV and ADD ready the next time, unless I'm completely misunderstanding JIT.
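
You can at least see the interpreter-side dispatch for yourself (a sketch; exact opcode names vary by CPython version):

    import dis

    def add(y, z):
        x = y + z
        return x

    dis.dis(add)
    # Prints roughly: LOAD_FAST y, LOAD_FAST z, BINARY_OP (+), STORE_FAST x, RETURN_VALUE.
    # Each opcode goes through the eval loop, type checks and refcounting, which is where
    # the "couple hundred cycles" estimate comes from, versus a single mov/add pair from
    # an ahead-of-time compiler.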


Software is fast. It’s the user that is slow


Users are very fast, it's the muscles that are slow


They're only x86 for marketing purposes.


So I can't run DOS on them in real mode anymore?


Technically, I think you can’t anymore?

https://www.intel.com/content/www/us/en/developer/articles/t...

Intel dropped support for 16-bit booting in their firmware a few years ago. And with x86S they are looking to drop support altogether and push people to virtualization.


Is there a 30% improvement in Time-to-Degradation from bad voltage management?


Maybe they shouldn't be using "Breakthrough Performance" in their ad copy...


In the 20 or so years I've owned laptops, I think I only upgraded my memory once or twice, and yet I still can't warm up to integrated memory.


I've upgraded memory on every laptop I've owned. In Serbia (and I assume other smaller markets) you can't just freely configure your laptop; you pick from a selection that was imported, and you typically have to make some kind of compromise between performance, display, keyboard, etc. Having the option to upgrade RAM or disk increases your choices in the other variables.

Last year I bought a MacBook Air, and the only options available immediately were 8/256, 8/512 and 16/256. Since I wanted more RAM and more SSD, I had to wait 2 months for delivery.


For some reason, upgrading memory seems more cost-efficient than just buying a better computer. For example, I bought a mini PC for 140 EUR and upgraded it to 32GB for an additional 80 EUR. Meanwhile, actual 32GB mini PCs with a roughly equivalent CPU start at much higher prices.


I've done it more than that because I've been buying refurbished or used laptops (generally ThinkPads) and maxing them out cheaply. If I were buying a new flagship machine every 5 years or so I doubt it would matter to me. But that used market is extremely important for keeping this stuff out of landfill for a few more years.


Counterpoint: I've maxed out memory in all my laptop and desktop PCs so far; even the machines for secondary uses I max out using budget parts.


Yikes, a 32GB RAM limit seems a bit low in 2024, even for a laptop.


Can you get more than that in any thin-and-light laptop today? Even the MacBook Air maxes out at 24 GB.

Wait for the more power-hungry SKUs. Their previous gen supported up to 96 GB of RAM on the higher-end SKUs.


MacBook Pros go up to 128GB.


They are not "light," though.


1.62kg pro vs 1.24kg air? For me that's the same class.


A third heavier, and it really feels heavy for its size. It's one of the things that made me decide not to get a Mac. I don't know why they use so much aluminum.


They are super sturdy. It's a single piece of metal that is carved into shape. And aluminium is also very easy to recycle.

I personally preferred the design of the first gen retina Macbook Pro. It felt so sleek and thin. The current design is a bit too chunky and boxy for my taste.


If we're talking æsthetics, I find they all look like silver blimps (glad they're finally shipping darker tones, but it's an affectation). I've seen a lot of dented MacBooks. I don't think they're any sturdier than any other well-built laptop, which lasts until it's obsolete (5-10 years). The alu does act as a heat sink, but I doubt it's necessary for the entire body.

I'm sticking with ThinkPads for now; I like the function-dictated brutality of their design, and I think carbon/magnesium and some plastic is a good approach for a much lighter result. A MacBook Pro is not only heavy but slippery. A lot of carbon goes into recycling and even shipping aluminum, and there's the declining ability to easily upgrade/repair components. I don't know ultimately what the environmental impact is between the materials, but Apple has advantages of consistency and scale; as much as I like ThinkPads, I don't like that there are a dozen different models each year, which would be impossible to effectively recycle even if they had a program in place.

We're pretty much into a blog here (here's a picture of a beach), but I tried a MacBook and had to return it because of ergonomic factors including weight; I got a pretty great 16" ThinkPad with the same weight as the 14" MBP, and I don't even want to think about the weight of the 16" MBP. It's frustrating that other top-tier companies, or the industry, can't find a way to have efficient product cycles (Framework is getting there). I guess it doesn't help that Apple has patented their unibody design, which shows how much they care about the environment in the larger sense.


I'm glad you found something you enjoy :)


Ahh I miss anandtech already :)


It seems like they didn't mention the process node.

Maybe all of its tiles are fabbed at TSMC, as some news revealed months ago, so they didn't want to discuss this?


Wikipedia is saying:

Compute tile - TSMC N3B

Platform Controller Tile - TSMC N6

Foveros Interposer Base Tile - Intel 22FFL

https://en.wikipedia.org/wiki/Lunar_Lake


I like AnandTech's tag line (rip): Lunar Lake: Designed By Intel, Built By TSMC (& Assembled By Intel)

https://www.anandtech.com/show/21425/intel-lunar-lake-archit...


It works so well for that other company: Designed in California, made in China.


> TSMC

so, Intel officially lost the fab race...


https://www.anandtech.com/show/16823/intel-accelerated-offen...

According to their own roadmap from a few years ago they should have been able to use their own 20A process node for this CPU.


Easy to miss when the product is named Core: they really do mean cores this time. No SMT.


Upside: you can have new cores, which are faster.

Downside: oh you thought performance, no we meant “even faster to burn out”.


I want to know its GB6 ST benchmarks and the wattage used during the run, i.e. I want to compare GB6 ST / watt figures.

GPU / Xe2 is difficult to measure, because 99% of the value comes from drivers. Either we get a very, very wide range of tests, or we have to judge it from something else.

Cost ~ perhaps the most important, because Qualcomm is extremely price-competitive. They are used to competing in the smartphone space, which has a very different set of margins. Intel will need to face the new reality: the good days aren't coming back.

And it has a hardware VVC decoder! Can't wait to see reviews on it. The problem is AnandTech is gone; I need to figure out which site to go to next.


This processor series seems great for thin-and-light laptops and handheld gaming devices, yet Intel has decided to only include Thunderbolt 4 instead of Thunderbolt 5.

This limits the longevity, upgradability, and relevance of gaming products using Lunar Lake.


> This limits the longevity, upgradability, and relevance of gaming products using Lunar Lake.

Darn thing has RAM in the package and you're worried not having Thunderbolt 5 limits upgradability? :P


Yeah. The GPU is the only thing you'd want to upgrade in the near future, and Thunderbolt 4 already throttles performance significantly. These CPU results would be pretty formidable paired with a full-fat GPU over 120/40Gbps Thunderbolt 5.


> GPU is the only thing you'd want to upgrade in the near future

agree to disagree


> yet Intel has decided to only include Thunderbolt 4 instead of Thunderbolt 5.

Well, they weren't sure if Thunderbolt 8 or 9 would be appropriate, so they stuck with 4. /s


Power consumption sounds really good. Would be great to get this on a mini-ITX board with 4 SATA ports to replace my 10-year-old home server. :-)


Or just use a laptop and you have a built in UPS and a backup keyboard and display already attached. If you can find one with Thunderbolt you should have the bandwidth for quite a few hard drives.


Is the integrated memory only for the laptop chips or do you think this will also come to desktops?


My guess would be that it won't - Arrow Lake (different codename ~= significant process difference) is coming later this year for desktop and high TDP mobile and the vibe I'm getting from Intel's materials and media coverage is that it will have conventional non-integrated memory. Plus two of the major advantages of moving the memory onboard are more bandwidth to feed the integrated graphics and lower power consumption, neither of which is much of a concern on desktop.


PoP memory doesn’t really have higher bandwidth.


It's only for low-end laptops.


Why is higher performance only for low-end laptops?


On-package memory isn't faster. High-end laptops will have Arrow Lake with more cores and the same memory performance but it will be on the motherboard. Desktops should also have good memory performance with CUDIMMs.


Because that's where they can deliver.


This probably will slow down ARM


No AVX-512, I think I'll pass.


So how long until we find out what fatal hardware shortcut was employed to achieve this performance?


The performance isn't anything impressive. Power efficiency is probably a real step forward for Intel, but that's to be expected: they finally, for a moment, stopped believing their own lies about their fab capabilities and outsourced the whole thing to TSMC (except for the passive interposer, which Intel is making in-house). But not the latest and greatest TSMC process; no, this is apparently last year's disappointing (by TSMC standards) and expensive N3B process, not the newer N3E.

A few years from now, this will either be an embarrassment Intel tries to hide from the history books (much like how they currently treat Cannonlake), or it'll be looked back upon as a turning point and the beginning of the end of Intel having fabs and chip design in the same company.


Breakthrough performance, but requires a 1500W PSU, right?


Snarky and incorrect. This should compare very favorably (performance and efficiency) with Apple's M3 chip.

Slightly worse CPU ( https://browser.geekbench.com/v6/cpu/compare/7483669?baselin... ), much better GPU and NPU, similar/slightly worse efficiency, 33% more RAM capacity.


>Snarky and incorrect

That's like 80% of comments in the thread. Unfortunately HN is not the place for sane discussion. It's a place for people to vent their Intel hate.


HN has changed significantly. Some years ago I quoted The Last Psychiatrist and we had a discussion about the points. Lately I quoted TLP and was insulted and downvoted. People are not interested in learning new things, but in following whatever cult they believe in.


The question is, how do you treat this sickness in society? Truth doesn't work, so what's the answer?


That's not a society issue but a community issue. The issue is solved when the community devolves over time and gets shitty enough that members who care for informative conversations leave for greener pastures in new communities, leaving the dross behind to stay in their cult echo chamber. It happens to every social media platform and it's happening to HN.


It's a society issue as well.


I think there are waves in society, the pendulum will swing back. I don't think there is a treatment.


> much better GPU and NPU

I don't know about the Apple ecosystem, but have you seen ANYTHING using the NPU on PC? I have not. I own an AMD laptop with an NPU (Ryzen 9 8945HS) and the NPU has never seen a single percentage point of utilization since the laptop was unboxed and put to use. And I actually have an interest in local AI, but all the stuff I use (like Ollama or ComfyUI) runs on the GPU; even if they had support for the NPU (I do not think they do), I would not run that stuff on the NPU because it's just not competitive with the Nvidia GPU that's also on my laptop.

To me, seeing Intel and AMD include this sort of useless thing is anger-inducing. I am paying for this. I want every inch of that silicon to be useful, not a detrimental waste of space like the NPU.

Seeing "better NPU" in a sentence meant to market a CPU doesn't elicit positive emotions.

In the windows world, the one thing that might end up using an NPU is also the thing most people do not want: Windows Recall. And that feature, for now, is exclusive to Qualcomm ARM PCs, current x86-64 NPU owners can't get it.


> seeing intel and AMD include this sort of useless thing is anger inducing. I am paying for this.

So don't pay for it. No one is making you. Wait for a model that doesn't have an NPU, or buy an older model that doesn't. It's not like it won't still be fast enough.


>Wait for a model that doesn't have an NPU

How many years from now? There isn't any high end CPU in laptops without those useless things now.

> buy an older model that doesn't

I don't think you've ever shopped for laptops, or you're lucky and live in a country that is particularly plentiful in PC choices. Looking for the specific combination of 32 GB of RAM, 1 TB of SSD, an AMD CPU (with Intel's current manufacturing woes I was not willing to gamble), and an Nvidia GPU with a minimum of 8 GB of VRAM took far more effort than I am normally willing to spend on activities like shopping. And now you tell me to "do all that while looking for a model that predates NPUs"?

Of course I could order online from god knows where, but I like buying from retailers that are known to honor their warranty, since there's always the possibility of buying a lemon, and I don't feel like wasting time shipping crap myself when I could just exchange it in person if it happened.


> How many years from now?

Entirely up to you. Point is, vote with your money.

> There isn't any high end CPU in laptops without those useless things now.

Even not considering processors from Intel/AMD?

> I don't think you've ever shopped for laptops,

I've purchased 6 in my life.

> Looking for the specific combination of having 32gb of ram, 1tb of SSD, an AMD CPU (with Intel's current manufacturing woes I was not willing to gamble), an NVIDIA GPU with a minimum of 8 gb of vram took far more efforts than I am normally to spend doing activities like shopping.

So be less picky or find a laptop that lets you upgrade the parts.

> I don't feel like wasting time shipping crap myself when I could just exchange it in place if it happened.

Fine, but this is a compromise you are willing to make, just like paying for the NPU. That's my point.

You don't need a super modern laptop.


Where are you getting better GPU benchmarks from? Afaik there haven't been public graphics benchmarks, and Intel didn't compare against Apple. Apple's GPUs have generally been class-leading for integrated graphics. I'd be surprised if Intel improved dramatically here, as their iGPU has been quite anaemic till recently.

Intel’s NPU is better but as noted in a thread higher up their average package wattage is a little over double (37 vs 15W) for a 20% performance claim.


You can start here... https://www.cpu-monkey.com/en/cpu_benchmark-bench_11

M3 is around 3.5 TeraFLOPS, Lunar Lake is 5.2-6.5 TeraFLOPS. I'm sure more detailed benchmarks will be coming up soon, but realistically, there is no way to make up that gap.

Apple here says the Macbook Pro has 18 TOPS (compared to Lunar Lake's 48 TOPS)...it's not really in the same league. https://www.apple.com/macbook-pro/


You can’t directly compare teraflops for GPU performance between different architectures unless you really only care about a single precision throughput, which is not a good metric. You do actually need real world graphics benchmarks to compare GPUs.

You also can't compare NPU TOPS without knowing the baseline data type. Apple for the M3 uses FP16, whereas Intel uses INT8. You have to double the Apple number to get the raw data throughput (ignoring any other efficiencies for operations in different types).

It's ~36 vs 48. So closer to 33% more for 100% more power use (impossible to measure just the NPU use, though). The more comparable SoC for power use would be the M3 Pro.
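
Spelled out, with the assumption that INT8 runs at twice the FP16 rate on the M3's Neural Engine:

    m3_fp16_tops = 18                    # Apple's quoted M3 Neural Engine figure (FP16)
    m3_int8_equiv = m3_fp16_tops * 2     # ~36, assuming INT8 at 2x the FP16 rate
    lunar_lake_int8_tops = 48            # Intel's quoted NPU figure (INT8)
    print(lunar_lake_int8_tops / m3_int8_equiv)  # ~1.33, i.e. roughly 33% more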


Gaming is essentially all done with FP32...so it's by far the best figure of merit (excluding issues with say RDNA3 dual issue which is rarely achieved in practice).

You will see gaming benchmarks come out soon, and Lunar Lake will be about 50% faster than the M3. (A secondary issue of course is how few latest gen games run on macOS....)

True, that's FP16, but it's not clear if M3's Neural Engine even supports INT8.

I'm sure M4 will make this much more competitive, but right now, Lunar Lake is overall a much more balanced architecture that most people would prefer, ceteris paribus....


Gaming is absolutely not all done with FP32. A lot of games actually target half precision, which is where most PowerVR-based GPUs pull ahead. The majority of shaders and buffers are better suited to half.

It also ignores things like occupancy and memory throughput, among many other aspects of a GPU.

I think a 50% delta for the GPU is very wishful thinking, given even Intel is only claiming a 33% uplift versus Meteor Lake, which itself was behind the M3 line when compared at similar TDP.

Regarding the NPU, the M3 does support INT8. It’s just that between the M3 and M4 release, the rest of the industry started coalescing on INT8, hence the change in base type.

I expect the same will happen again now that NVIDIA are touting INT4 as their performance standard for marketing.


Intel Arc runs FP16 at 2:1 compared to FP32, and Battlemage on Lunar Lake is the same, and XMX FP16 is actually at 8:1. I don't think M3's GPU has a better ratio.

Of course there are many other aspects, but given it's Intel's latest architecture, which has improved efficiency tremendously (see https://cdrdv2-public.intel.com/824434/2024_Intel_Tech%20Tou... ) it's pretty unlikely M3 has any fundamental advantage.

Do you have any reference showing the Neural Engine in M3 supports INT8 (and at 2x FP16)? Just curious.


I'm not saying the M3 has a fundamental advantage. I'm saying that the difference is unlikely to be as large as stated in real-world use. I don't think an SoC at half the power budget is going to be magically more powerful.

Regarding INT8, the frontend for the Neural Engine is coremltools, and it has supported INT8 for a while, though their page does say the M4 has new int8-int8 acceleration https://apple.github.io/coremltools/docs-guides/source/opt-o...

And one of the contributors to the repo saying the Neural Engine supports Int8 https://github.com/apple/coremltools/issues/929#issuecomment...


Possibly Battlemage running at 100% will use more power than M3's GPU running at 100%...it will take some detailed testing to track that, plus Lunar Lake can be set at different TDPs (plus performance settings, on battery vs. connected to power). Not to mention different "100%" GPU workloads.

At the end of the day though, users vastly prefer a more powerful built in GPU for the occasional game session...Intel is willing to pay for the transistors, and Apple reserves them for the M3 Pro instead.

Nice to see ANE supports that...good to know!


I’m not sure this tracks because you’re not comparing like for like

> Intel is willing to pay for the transistors, and Apple reserves them for the M3 Pro instead.

Apple is also willing to pay for it. You just happen to be comparing the higher tier Intel to the lower tier Apple chip.

Intel just doesn’t have a suitable answer in that tier level yet because they haven’t launched the Core Ultra 3.

If you were to map the Intel levels to Apple, they'd roughly line up like so (ignoring Intel's power-delineated lines):

Core 3 -> base M series

Core 5 -> M pro

Core 9 -> M Max

The Ultra 9 288V you quote is their highest spec device and has a recommended range of 17-37W.

The Ultra 5 226V is the closest to an M3 at 8-37W but loses a lot of the performance numbers you quote and still consumes more power as a whole.


The cheapest Apple with an M3 Pro is https://www.apple.com/shop/buy-mac/macbook-pro/14-inch-space... which is $1999 (with 18 GB RAM and 512 GB SSD).

Full pricing on Lunar Lake is not available yet, but for example, an XPS 13 with an Ultra 7 is $1399 (16 GB RAM and 512 GB SSD) https://www.dell.com/en-us/shop/dell-computer-laptops/new-xp...

Thinkpads will probably be a bit more...Acer a bit less...Asus will probably be around the same or less.

Here's a high spec Asus (32 GB RAM, 1 TB SSD). https://shop.asus.com/us/90nb14f4-m00620-asus-zenbook-s-14-u... for $1499... Apple's equivalent is $2,599!


At that point you're comparing different products as a whole, not the SoC, and it becomes a significantly different discussion.

Neither of those laptops you linked are comparable to the MacBook Pro on a number of points, primarily the display.

Just like Intel doesn’t have an M3 competitor out, Apple doesn’t have a competitor for the lower end of premium laptops.


The Asus screen is very close (only significant difference is brightness), AND a touchscreen!

https://www.youtube.com/watch?v=JnJw54oyfLE

14.0-inch, 3K (2880 x 1800) OLED 16:10 aspect ratio, 0.2ms response time, 120Hz refresh rate, 500nits HDR peak brightness, 100% DCI-P3 color gamut, 1,000,000:1, 1.07 billion colors, PANTONE Validated, Glossy display, 70% less harmful blue light, SGS Eye Care Display, Touch screen, (Screen-to-body ratio)90%, With stylus support

vs

14.2-inch (diagonal) Liquid Retina XDR display; 3024-by-1964 native resolution at 254 pixels per inch

1,000,000:1 contrast ratio; XDR brightness: 1000 nits sustained full-screen, 1600 nits peak (HDR content only); SDR brightness: 600 nits

1 billion colors, Wide color (P3), True Tone technology

ProMotion technology for adaptive refresh rates up to 120Hz; fixed refresh rates: 47.95Hz, 48.00Hz, 50.00Hz, 59.94Hz, 60.00Hz


Ultra 200V is a good model name. Interested to see how hot they are running them for the performance crown. I suspect we'll still be seeing 275w+


It's for laptops, so no, it won't be 275W. Peak is 37W: https://videocardz.com/newz/intel-unveils-core-ultra-200v-lu...


Apologies, somehow I didn't finish the comment with ' on desktop', regarding the new arch these are all based on.


How many volts is it though?


1 volt, 275 amps


I had to do a double take at 200 volts.

Edit: According to the other commenter, not just me...



