This is an efficiency-focused laptop series with new cores capable of low idle and near-idle power draw, and an integrated GPU analogous to AMD's 780M (looking forward to benchmarks).
The AI comments in the PR are due to the package devoting space to a neural processor. They are claiming up to 48 TOPS, which exceeds Apple's M4's reported 38 TOPS.
As with Apple Silicon, RAM is incorporated into the package. It has 4 P-cores and 4 E-cores, like Apple's M1.
It's Intel, so I'm looking forward to benchmarks and real-world tests, and they will need a good desktop version of these packages later this year, but this new architecture finally appears to be headed in the right direction.
Good point. It seemed weird for this to be higher ops/sec and you pointed out why. If it's true that Apple releases the rest of M4 in November, and Intel releases the desktop architecture in October-November, we'll have full comparisons.
Where are you getting that the NPU draws up to 37W?
I believe that is the full package power, which Intel pairs with the combined CPU+GPU+NPU figure of roughly 120 TOPS.
This is a bit of word salad from Intel's marketing, with a catch buried inside the messaging.
> up to 120 total platform TOPS (tera operations per second) across central processing unit (CPU), graphic processing unit (GPU) and neural processing unit (NPU) to deliver the most compatible and performant AI experiences across models and engines.
Notice they say "most compatible and performant AI across models and engines"; this means you don't really get to add all the TOPS up and run them as a unified workload. That sum of 120 is marketing speak: without a doubt the CPU will hit ~30 W alone just running the 8 cores at full speed, leaving little power for the GPU and NPU, even with their claimed 50% efficiency gain.
They already do this today with their existing laptop chips, so I highly doubt they have changed.
Instead, you get different power targets, TOPS and efficiency depending on the workload. It's the same with Apple: we don't actually know how much wattage the NPU drew during their claimed 38 TOPS.
Package power is about all we've got until the chip comes out. It's not like Apple and Intel are putting 2 W GPUs and 2 W NPUs on a 37 W package (at least not anymore). They likely utilize 30-50% of the available power.
This is how I read it anyway; I'd love someone else to weigh in and tell me if it's different from previous press releases.
The 37 W is only the short-term power, sustainable for a few seconds, perhaps for half a minute.
Lunar Lake is optimized for a steady-state package power of 17 W and it is likely that most devices with it will use this value as the default power limit.
The 17 W power limit has been essential in determining the characteristics of Lunar Lake.
A CPU designed for 37 W would have looked very different, more like AMD Strix Point (which is optimized for 28 W or more): more CPU cores, a beefier GPU, and more PCIe lanes and other peripheral interfaces.
I can imagine that at some point in the next year or two we'll see games that use the GPU for rendering and then the NPU for upscaling, but I don't think that's actually happening yet. As far as I am aware, none of the fancy "AI upscaling" technologies used by games are running on anything other than the GPU so far.
I think gaming has many exceptional use cases for an NPU in the next “few”-to-“several” years.
One big one I see coming:
Dynamic Voiceover.
Imagine instead of a generic filler name, each voice actor's generative model could speak your character's name in video game scenes. Now imagine generating entirely novel conversations with highly nuanced NPCs.
Anyone who takes Apple's 38 TOPS claim seriously is being misled, just like that time Apple compared its iGPU with the 3070 Laptop without qualification. Most GPUs do a couple times over 38, Apple's iGPU likely included. Even Intel's CPU does more; that's a low bar.
The NPU is a "background AI" device. It performs better than the CPU and worse than the GPU for low-end tasks, without having to wake up the messy GPU. It's not supposed to be fast at all.
Comparing SoCs by advertised TOPS figures is like comparing cars by reversing speed. That sometimes matters, but it's rarely the most relevant parameter.
I am not seeing the misleading part. A neural processor on the package is supposed to be good at preserving battery by doing all the low-power, low-precision tasks. More TOPS = more consistent battery life with heavier usage over the day, is how I'd explain it to an everyday user.
Some people think the claimed 38 TOPS was itself some sort of groundbreaking achievement, despite it being just a lesser substitute for the iGPU, closer to the biggest E-core shipped so far or the fastest on-chip video encoder of its day. That's the misleading part.
Saying that the NPU is a lesser substitute to the GPU is only accurate if you believe raw performance is the only important metric. Which means you're still missing the entire point of NPUs.
For comparison, my 7900 XTX, which is the most powerful card that AMD offers, hits a (specced) peak of 120 tflops with fp16 operations, at 300W.
However, when they say "TOPS" instead of "TFLOPS" that usually means something like int8, and it's unclear if this chip will support any float format, so with most networks you'll have to quantize first. Not sure what overhead that adds to get the same quality.
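For anyone who hasn't done it, the quantization step itself is simple enough; here's a minimal sketch in plain numpy (symmetric, per-tensor, and the function names are mine, just for illustration; real toolchains add per-channel scales and calibration data, which is where the quality overhead gets managed):

    # A toy symmetric, per-tensor int8 quantizer.
    import numpy as np

    def quantize_int8(w):
        scale = np.abs(w).max() / 127.0                      # map the float range onto [-127, 127]
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale                  # what you conceptually get back after int8 math

    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(w)
    print(np.abs(w - dequantize(q, s)).max())                # the rounding error that costs you quality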
Two points about the GPU comparison. Modern GPUs, including the 7900 XTX, have tensor cores which give higher TOPS, especially for smaller data formats like int8. Desktop GPUs are also driven to power levels and voltages that are beyond peak efficiency, so TOPS/watt will be worse than an NPU by design. (I'm sure the overall architecture is also a factor, but don't underestimate the effect of those comparatively simple power/frequency/voltage decisions.)
And of course, software support is still very much in "I'll believe it when I see it" territory.
1. Meteor Lake was the first Intel SoC with an NPU; it came out late last year, if you're curious.
2. There won't be a desktop Lunar Lake (well, at least not one for a socketable ATX desktop, maybe mini PCs). Instead, its 2nd-gen (in this advanced-SoC era) desktop counterpart is called Arrow Lake. Rumors say it will be out next month.
Lunar Lake is for ultraportables/ultralight notebooks, and Arrow Lake will be used in desktops and beefier laptops.
Seems like 228V vs 238V is just binning on max freq...
If we take into account durability (some may say 125H is the best of the bunch right now...), they can be interesting solutions with a good perf/w ratio.
I wonder if they have read the study that showed that mentioning AI in a product description significantly reduced intention to buy. It seems intuitively plausible, and the effect in the study was significant and measurable.
Somehow I think even if they believed that, the incentives are still to talk about AI, because it signals something to some kind of investor class, and the stock goes up. It’s pretty maddening, especially when it has the potential to hurt the fundamentals (sales / profits), but this seems to be the cognitive dissonance of the moment.
Do you know how exactly your comment reads compared to “the internet is a fad” of the late 90s?
I can select a PDF and ask for facts from it and page numbers where the information is. If you think that isn’t amazing I don’t know what your scale is.
IDK what you've been doing with LLMs, but it is a major shift even if you can't see it.
> I can select a PDF and ask for facts from it and page numbers where the information is
You can already do this, press ctrl-f and search. Boom, done.
LLMs don't solve new problems much - rather they give you a new INTERFACE into a solution. Natural language.
Before you'd use commands, specific software, processes, formats, configuration. Now you can use natural language (maybe, sometimes). To me, this isn't a breakthrough.
Please give me some examples of real new capabilities that LLMs gave people. Something that they weren't able to do before, or something that was wildly impractical before LLMs appeared.
Well yes, neural networks are very good for pattern recognition and data categorization tasks. Also for speech synthesis. I wasn't saying that this is a bad application of AI. I was talking about generative AI and LLMs specifically, the kind that takes prompt strings and spits out text/image/audio/whatever.
> autocomplete on steroids
Maybe if we manage to run LLMs locally, we can put one into a keyboard app and finally have a Russian touchscreen keyboard that doesn't make me want to yeet my phone at a wall for repeatedly failing to type the word in the correct grammatical form when I use gesture input. But somehow we aren't there yet. I guess ChatGPT is more important.
Same thing for the new Pixel. They actually were using AI as a feature that made their phones better than Samsung phones.
Instead of each company upping the ante on their cameras, it's now going to be another five years of companies pushing new things and tweaks their AI can do better. This is just the newest arms race in smartphones.
Rather cynical comments so far. I personally am very interested to see how this line of chips does, both in terms of performance (really efficiency, for this sort of chip) and market performance. Hopefully things like Lunar Lake, Arrow Lake, etc., and their 18A node all turn out to be as good as some of the early leaks and press releases would indicate, because Intel needs some big wins to get back on track.
We spent decades taking leaps and bounds with every chip release. We've now seemingly settled into the incremental improvement phase. The chip makers have responded by burning tons of transistors on extra crap that spends most of its life powered down.
We made leaps and bounds before because clock speeds were going up 50% or more between generations. Add in architecture improvements and it was easy to see actual performance double from one generation to the next.
But we're struggling to get clocks faster now, and I always imagined that it's because the speed of electricity isn't fast enough. At 6 GHz, in one clock cycle, light travels only about 2 inches/5 cm. Electricity moves slower than the speed of light, depending on the medium it's going through. At the frequencies we're operating at, I figure that transistor switching speed and clock skew just within the CPU can start to be an issue.
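Quick back-of-the-envelope on that distance, using the vacuum speed of light (on-die signals propagate slower, so the real figure is shorter still):

    c = 3.0e8        # speed of light in vacuum, m/s
    f = 6.0e9        # a 6 GHz clock, Hz
    print(c / f)     # 0.05 m, i.e. about 5 cm (~2 inches) per clock cycle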
We already have tons of CPU optimizations. Out-of-order execution, branch prediction, register renaming, I could go on. There's probably not much more we can do to improve single-threaded performance. Every avenue for optimizing x86 has been taken.
And so we go multi-core, but that ends up making heat a primary concern. It also relies on your task being parallel.
Or we go ARM, but now some of your software that has had x86-specific optimizations like using AVX-512 has to be rewritten.
Being realistic doesn't mean you need to be cynical. Leaps-and-bounds progress never lasts forever. Incremental improvement is still worth celebrating.
I totally disagree that specialized processing units are wasteful because they spend most of their lives powered down. Your iPhone uses the Neural Engine every time you open the camera app. The announced AI features for the next iOS version will be using on-device AI much of the time you use Siri, which a lot of people use a lot.
The old-school version of this would be like if you were dissing multimedia instructions like hardware encoders/decoders. How do you think your laptop so effortlessly plays back 4K video and somehow gets better battery life than when you're working on a Word document? It's that part of your processor that usually "sits there doing nothing."
You just don’t realize how much these segments of the chip are accelerating your experience.
You want a chip that never powers down? Boy, have I got a deal for you. Zero transistor waste, zero extra crap, just like you asked for. It's a 286. Limited availability, so gonna have to ask $5000 per chip.
There's nothing wrong with the state of things; however, I'm merely pointing out that things are significantly different than they used to be, and a period of changed expectations might be warranted.
CPUs used to be purely about computing power, now they're about computing accessories, which is a different type of market and purchase all together.
If you can't acknowledge the differences without becoming irrationally aggressive as if I've insulted you personally then this is not going to be a great conversation.
I've got an i9-13900K that shit the bed, which I had to replace with AMD.
I opened a return ticket with Intel; after a day-plus delay they followed up with questions about the BIOS version. The motherboard was no longer in service (see AMD above), so I couldn't immediately answer their question.
Then they closed the support ticket.
So I have to start from square one, but I may not bother, because the value of the time I've wasted on this already (tens of hours) vastly exceeds the cost of replacing all my chips with AMD.
There is NOTHING Intel could release that I would buy.
Peak performance is already good enough for most people. It's performance per watt where they still lag. Intel needs a Ballmer-like figure to remind their engineers, "Battery life, battery life, battery life."
Performance per watt and battery life are only somewhat related. For laptops, idle and sleep power usage are far more important. For example, the AMD AI 300 chips have better performance per watt than Snapdragon X Elite but have worse battery life.
If you would have told me in the 1980's we would still be using x86 based chips in 2024 I would have laughed at you. I bet DOS runs really really fast on them.
Running or virtualizing DOS-era software can be non-trivial: see, for example, Windows 98 vs TLB cache invalidation behavior in modern CPUs https://blog.stuffedcow.net/2015/08/win9x-tlb-invalidation-b... requiring a patch. Any code that does loop counting to estimate cycles for sleep() will also overflow (hello, Pascal CRT Error 200!) requiring a different patch.
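For the curious, here's a rough sketch of that second failure mode; the numbers and structure below are illustrative assumptions, not the CRT unit's actual assembly, but the shape of the bug is the same (a 16-bit calibration result that fast CPUs overflow):

    # Turbo Pascal's CRT Delay() calibrates by counting busy-loop iterations
    # during one ~55 ms BIOS timer tick, then divides by 55 and keeps the
    # loops-per-millisecond result in a 16-bit value.
    def crt_delay_calibration(loop_iterations_per_second):
        iterations_per_tick = loop_iterations_per_second * 0.055
        loops_per_ms = int(iterations_per_tick / 55)
        if loops_per_ms > 0xFFFF:                      # result no longer fits in 16 bits
            raise OverflowError("Runtime Error 200")   # the divide-overflow crash on fast CPUs
        return loops_per_ms

    print(crt_delay_calibration(20_000_000))           # DOS-era speeds: fine
    try:
        print(crt_delay_calibration(2_000_000_000))    # modern speeds: blows up at startup
    except OverflowError as e:
        print(e)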
> If you would have told me in the 1980's we would still be using x86 based chips in 2024 I would have laughed at you.
In the late 1970s, Gordon Moore believed that their next ISA would last the lifetime of the company. Now, technically, he thought their next ISA would be in the 8800. But it was such a drawn out failure they came up with the 8086 in an emergency 3-week sprint.
Instruction set wars were a 1970s concept anyways. The way x86 was built in the 80s vs. the way it's built today shows that it was a complete misnomer to begin with. It turns out instruction decoding really is the least important part of the pipeline.
How tight are those loops really, and how complex are your instructions? I'd suspect you're blowing out the cache and leaning on a single decoder due to the nature of the instructions. Particularly if these are SIMD instructions: even if you sped up or parallelized the decoders, you'd be up against later pipeline latencies very quickly anyway. At that point, are you actually measuring a real bound?
We did go from 16 to 32 and now to 64 bit, so there have been some significant changes in that time, plus all the new instructions added. But ARM chips are taking over; I think there are now more than 10 ARM chips for every x86 one sold.
ARM chips were widespread already long before they reached performance parity with x86, and actually already before the smartphone market took off.
There always have been vastly more chips of another architecture for every x86 chip. Most of the output of fabs with older processes are microcontrollers, most of which use some bespoke instruction set.
I'd be curious to see how many clock cycles something as simple as "x = y + z" (Where all three variables are integers) takes in various languages.
The compiled languages would likely output a single MOV and ADD and get done in 2 cycles (plus any time to fetch from memory). Something like Python probably takes a couple hundred with all its type checking. JIT languages I would think would take a couple hundred the first time the line gets executed, but then have a single MOV and ADD ready the next time, unless I'm completely misunderstanding JIT.
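You can see where CPython's extra work goes by dumping the bytecode for that line; every op below runs through the interpreter's dispatch loop, dynamic type checks and refcounting, which is roughly where those hundreds of cycles come from (throwaway function, standard library only):

    import dis

    def f(y, z):
        x = y + z
        return x

    # Prints LOAD_FAST y, LOAD_FAST z, BINARY_ADD (BINARY_OP on Python 3.11+),
    # STORE_FAST x, and so on, versus the single MOV/ADD a compiler would emit.
    dis.dis(f)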
Intel dropped support for 16-bit booting in their firmware a few years ago. And with x86S they are looking to drop support altogether and push people to virtualization.
I've upgraded memory on every laptop I've owned. In Serbia (and I assume other smaller markets) you can't just freely configure your laptop; you pick from a selection that was imported, and you typically have to make some kind of compromise between performance, display, keyboard, etc. Having the option to upgrade RAM or disk increases your choices on the other variables.
Last year I bought a MacBook Air, and the only options available immediately were 8/256, 8/512 and 16/256. Since I wanted more RAM and more SSD, I had to wait 2 months for delivery.
For some reason, upgrading memory seems more cost efficient than just buying a better computer. For example, I bought a mini PC for 140 EUR and upgraded to 32 GB for an additional 80 EUR. Meanwhile, actual 32 GB mini PCs with a roughly equivalent CPU start at much higher prices.
I've done it more than that because I've been buying refurbished or used laptops (generally ThinkPads) and maxing them out cheaply. If I were buying a new flagship machine every 5 years or so I doubt it would matter to me. But that used market is extremely important for keeping this stuff out of landfill for a few more years.
A third heavier, and it really feels heavy for its size. It's one of the things that made me decide not to get a Mac. I don't know why they use so much aluminum.
They are super sturdy. It's a single piece of metal that is carved into shape. And aluminium is also very easy to recycle.
I personally preferred the design of the first gen retina Macbook Pro. It felt so sleek and thin. The current design is a bit too chunky and boxy for my taste.
If we're talking æsthetics, I find they all look like silver blimps (glad they're finally shipping darker tones, but it's an affectation). I've seen a lot of dented Macbooks. I don't think they're any sturdier than any other well built laptop, which last until they're obsolete (5 - 10 years). The alu does act as a heat sink, but I doubt it's necessary for the entire body.
I'm sticking with Thinkpads for now; I like the function-dictates-form brutality of their design, and I think carbon/magnesium and some plastic is a good approach for a much lighter result. A Macbook Pro is not only heavy, but slippery. A lot of carbon goes into recycling and even shipping aluminum, and there's also the diminishing ability to easily upgrade/repair components. I don't know ultimately what the environmental impact is between the materials, but Apple has the advantages of consistency and scale; as much as I like Thinkpads, I don't like that there are a dozen different models each year, which would be impossible to effectively recycle even if they had a program in place.
We're pretty much into a blog here (here's a picture of a beach), but I tried a Macbook and had to return it because of ergonomic factors including weight; I got a pretty great 16" Thinkpad with the same weight as the 14" MBP, and I don't even want to think about the weight of the 16" MBP. It's frustrating that other top-tier companies or the industry can't find a way to have efficient product cycles (Framework is getting there). I guess it doesn't help that Apple has patented their unibody design, which shows how much they care about the environment in the larger sense.
I want to know its GB6 ST benchmarks and the wattage used during the run, i.e. I want to compare GB6 ST/watt figures.
The GPU / Xe2 is difficult to measure, because 99% of the value comes from drivers. Either we get a very, very wide range of tests, or we have to judge it from something else.
Cost ~ perhaps the most important factor, because Qualcomm is extremely price competitive. They are used to competing in the smartphone space, which has a very different set of margins. Intel will need to face the new reality that the good days aren't coming back.
And it has a hardware VVC decoder! Can't wait to see reviews of it. The problem is AnandTech is gone; I need to figure out which site to go to next.
This processor series seems great for thin-and-light laptops and handheld gaming devices, yet Intel has decided to only include Thunderbolt 4 instead of Thunderbolt 5.
This limits the longevity, upgradability, and relevance of gaming products using Lunar Lake.
Yeah. The GPU is the only thing you'd want to upgrade in the near future, and Thunderbolt 4 already throttles performance significantly. These CPU results would be pretty formidable paired with a full-fat GPU over 120/40 Gbps Thunderbolt 5.
Or just use a laptop and you have a built in UPS and a backup keyboard and display already attached. If you can find one with Thunderbolt you should have the bandwidth for quite a few hard drives.
My guess would be that it won't - Arrow Lake (different codename ~= significant process difference) is coming later this year for desktop and high TDP mobile and the vibe I'm getting from Intel's materials and media coverage is that it will have conventional non-integrated memory. Plus two of the major advantages of moving the memory onboard are more bandwidth to feed the integrated graphics and lower power consumption, neither of which is much of a concern on desktop.
On-package memory isn't faster. High-end laptops will have Arrow Lake with more cores and the same memory performance but it will be on the motherboard. Desktops should also have good memory performance with CUDIMMs.
The performance isn't anything impressive. Power efficiency is probably a real step forward for Intel, but that's to be expected: they finally, for a moment, stopped believing their own lies about their fab capabilities and outsourced the whole thing to TSMC (except for the passive interposer, which Intel is making in-house). But not the latest and greatest TSMC process; no, this is apparently last year's disappointing (by TSMC standards) and expensive N3B process, not the newer N3E.
A few years from now, this will either be an embarrassment Intel tries to hide from the history books (much like how they currently treat Cannonlake), or it'll be looked back upon as a turning point and the beginning of the end of Intel having fabs and chip design in the same company.
That's like 80% of the comments in the thread. Unfortunately, HN is not the place for sane discussion. It's a place for people to vent their Intel hate.
HN has changed significantly. Some years ago I quoted The Last Psychiatrist and we had a discussion about the points. Lately I quoted TLP and was insulted and downvoted. People are not interested in learning new things, but in following whatever cult they believe in.
That's not a society issue but a community issue. The issue is solved when the community devolves over time and gets shitty enough that the members who care about informative conversations leave for greener pastures in new communities, leaving the dross behind to stay in their cult echo chamber. It happens to every social media platform, and it's happening to HN.
I don't know about the Apple ecosystem, but have you seen ANYTHING using the NPU on PC? I have not. I own an AMD laptop with an NPU (Ryzen 9 8945HS) and the NPU has never seen a single percentage of utilization since the laptop was unboxed and put to use. And I actually have an interest in local AI, but all the stuff I use (like Ollama or ComfyUI) run on the GPU, even if they had support for the NPU (I do not think they do) I would not run that stuff on the NPU because it's just not competitive with the nvidia gpu that's also on my laptop.
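For what it's worth, here's roughly how I check whether anything on a machine even exposes the NPU to a common runtime. The provider names in the comment are assumptions that depend on which onnxruntime build you have (AMD's Ryzen AI stack ships a Vitis AI provider, Intel's NPU path goes through OpenVINO; neither ships in the stock pip wheel):

    import onnxruntime as ort

    # The stock CPU wheel reports just ['CPUExecutionProvider'] and the GPU build
    # adds CUDA; an NPU-capable provider only shows up with a vendor-specific
    # build, which is consistent with the NPU sitting at 0% the whole time.
    print(ort.get_available_providers())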
To me, seeing Intel and AMD include this sort of useless thing is anger-inducing. I am paying for this. I want every inch of that silicon to be useful, not a detrimental waste of space like the NPU.
Seeing "better NPU" in a sentence meant to market a CPU doesn't elicit positive emotions.
In the windows world, the one thing that might end up using an NPU is also the thing most people do not want: Windows Recall. And that feature, for now, is exclusive to Qualcomm ARM PCs, current x86-64 NPU owners can't get it.
> seeing Intel and AMD include this sort of useless thing is anger-inducing. I am paying for this.
So don't pay for it. No one is making you. Wait for a model that doesn't have an NPU, or buy an older model that doesn't. It's not like it won't still be fast enough.
How many years from now? There isn't any high end CPU in laptops without those useless things now.
> buy an older model that doesn't
I don't think you've ever shopped for laptops, or you're lucky and live in a country that is particularly plentiful in PC choices. Looking for the specific combination of 32 GB of RAM, 1 TB of SSD, an AMD CPU (with Intel's current manufacturing woes I was not willing to gamble), and an NVIDIA GPU with a minimum of 8 GB of VRAM took far more effort than I'm normally willing to spend on an activity like shopping. And now you tell me "do all that while looking for a model that predates NPUs"?
Of course I could order online from god knows where, but I like buying from retailers that are known to honor their warranty, since there's always the possibility of getting a lemon, and I don't feel like wasting time shipping crap myself when I could just exchange it in place if it happened.
Entirely up to you. Point is, vote with your money.
> There isn't any high end CPU in laptops without those useless things now.
Even not considering processors from Intel/AMD?
> I don't think you've ever shopped for laptops,
I've purchased 6 in my life.
> Looking for the specific combination of 32 GB of RAM, 1 TB of SSD, an AMD CPU (with Intel's current manufacturing woes I was not willing to gamble), and an NVIDIA GPU with a minimum of 8 GB of VRAM took far more effort than I'm normally willing to spend on an activity like shopping.
So be less picky or find a laptop that lets you upgrade the parts.
> I don't feel like wasting time shipping crap myself when I could just exchange it in place if it happened.
Fine, but this is a compromise you are willing to make, just like paying for the NPU. That's my point.
Where are you getting better GPU benchmarks from? AFAIK there have been no public graphics benchmarks, and Intel didn't compare against Apple. Apple's GPUs have generally been class-leading for integrated graphics. I'd be surprised if Intel improved dramatically here, as their iGPU has been quite anaemic until recently.
Intel's NPU is better, but as noted in a thread higher up, their average package wattage is a little over double (37 W vs 15 W) for a ~20% performance claim.
The M3 is around 3.5 TFLOPS, while Lunar Lake is 5.2-6.5 TFLOPS. I'm sure more detailed benchmarks will be coming soon, but realistically, there is no way to make up that gap.
Apple here says the MacBook Pro has 18 TOPS (compared to Lunar Lake's 48 TOPS)... it's not really in the same league.
https://www.apple.com/macbook-pro/
You can’t directly compare teraflops for GPU performance between different architectures unless you really only care about a single precision throughput, which is not a good metric. You do actually need real world graphics benchmarks to compare GPUs.
You also can't compare NPU TOPS without knowing the baseline data type. Apple for the M3 uses FP16, whereas Intel uses INT8. You have to double the Apple number to get the raw data throughput (ignoring any other efficiencies for operations in different types).
It's ~36 vs 48, so closer to 33% more for 100% more power use (impossible to measure just the NPU's use, though). The more comparable SoC for power use would be the M3 Pro.
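The back-of-the-envelope math behind that, using only the two vendors' headline numbers and the double-when-you-halve-precision rule of thumb:

    m3_fp16_tops = 18                              # Apple's quoted MacBook Pro figure, FP16
    m3_int8_equivalent = m3_fp16_tops * 2          # rule of thumb: ~2x throughput at half precision
    lnl_int8_tops = 48                             # Intel's quoted Lunar Lake figure, INT8

    print(m3_int8_equivalent)                      # 36
    print(lnl_int8_tops / m3_int8_equivalent - 1)  # ~0.33, i.e. roughly 33% more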
Gaming is essentially all done with FP32...so it's by far the best figure of merit (excluding issues with say RDNA3 dual issue which is rarely achieved in practice).
You will see gaming benchmarks come out soon, and Lunar Lake will be about 50% faster than the M3. (A secondary issue of course is how few latest gen games run on macOS....)
True, that's FP16, but it's not clear if M3's Neural Engine even supports INT8.
I'm sure M4 will make this much more competitive, but right now, Lunar Lake is overall a much more balanced architecture that most people would prefer, ceteris paribus....
Gaming is absolutely not all done with FP32. A lot of games actually target half precision, which is where most PowerVR-based GPUs pull ahead. The majority of shaders and buffers are better suited for half.
It also ignores things like occupancy and memory throughput, among many other aspects of a GPU.
I think a 50% delta for GPU is very wishful thinking given even Intel are only claiming a 33% uplift versus meteor lake, which itself was behind the M3 line when compared against similar TDP.
Regarding the NPU, the M3 does support INT8. It’s just that between the M3 and M4 release, the rest of the industry started coalescing on INT8, hence the change in base type.
I expect the same will happen again now that NVIDIA are touting INT4 as their performance standard for marketing.
Intel Arc runs FP16 at 2:1 compared to FP32, and Battlemage on Lunar Lake is the same, and XMX FP16 is actually at 8:1. I don't think M3's GPU has a better ratio.
Of course there are many other aspects, but given it's Intel's latest architecture, which has improved efficiency tremendously (see https://cdrdv2-public.intel.com/824434/2024_Intel_Tech%20Tou... ) it's pretty unlikely M3 has any fundamental advantage.
Do you have any reference showing Neural Engine in M3 supports INT8 (and at 2x FP16? Just curious.)
I’m not saying the M3 has a fundamental advantage. I’m saying that it’s unlikely to be as high a difference as is being stated in real world use. I don’t think a SOC at half the power budget is going to be magically more powerful.
Possibly Battlemage running at 100% will use more power than the M3's GPU running at 100%... it will take some detailed testing to track that, plus Lunar Lake can be set to different TDPs (plus performance settings, on battery vs. connected to power). Not to mention different "100%" GPU workloads.
At the end of the day though, users vastly prefer a more powerful built in GPU for the occasional game session...Intel is willing to pay for the transistors, and Apple reserves them for the M3 Pro instead.