Intel's ambitious Meteor Lake iGPU (chipsandcheese.com)
145 points by ingve 40 days ago | 73 comments



Intel Core Ultra 7 155H can be configured to 20-65W TDP. AMD Ryzen Z1 Extreme can be configured to 9-30W. Similar performance level at twice the power consumption doesn’t look particularly impressive.
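A quick sanity check on those configured TDP ranges (just ratios of the quoted package limits, not measured GPU draw; a rough Python sketch):

  # configured TDP ranges quoted above, in watts
  intel_155h = (20, 65)  # Intel Core Ultra 7 155H
  amd_z1e = (9, 30)      # AMD Ryzen Z1 Extreme
  ratios = (intel_155h[0] / amd_z1e[0], intel_155h[1] / amd_z1e[1])
  print(ratios)  # (~2.2, ~2.2) -- roughly 2x at both ends of the configurable range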

Which is sad, because things tend to stagnate without competition. Look at the current GPU landscape dominated by Nvidia, or the CPU market during the decade before AMD Ryzen was released. Unless Intel manages to deliver something good, we're going to see similar stagnation with CPUs and APUs, with the market dominated by AMD products.


Qualcomm and Apple are pushing forward, so probably not a big issue. I'd rather say we're in a very interesting situation for CPUs right now (compared with 5-10 years ago).

GPUs, on the other hand, seem much more dominated by Nvidia. It seems there are some competitors (AMD aside) lurking in the background though (again, Qualcomm and Apple).

Interesting that the new contenders seem to have a very typical disruptive-innovation approach: moving up from the lower end toward higher-end products (with one specific edge: power efficiency). Even more interesting, this concept (coined/popularized by Clayton Christensen) stems from research on the hard drive industry! Full circle!


I haven't been following closely enough to know what the obstacles are, but I wonder why some of these vendors don't experiment with wider system memory buses to increase bandwidth, now that these iGPUs are gaining more interesting cache and IO topologies.

My memory is that we used to see more variation with single, dual, triple, and quad channel memory configurations in essentially prosumer/pro desktops. But lately, it seems like everyone has just stuck to dual channel and minor variations in memory timing of the same DDR generations.


iGPUs like the ones in PHX/MTL have to go into handhelds and ultrabooks, so they're going to be power- and thermally-limited before 2-4 MB of cache + LPDDR5 becomes a major bottleneck.

Now if you can give the iGPU an 80W power budget instead of maybe 15W, that's a different story. But at that point you're competing with discrete GPUs that can show up with GDDR6, and maybe an iGPU doesn't make so much sense anymore.


Does the fact that the new GPU "chips" from Apple et al. are integrated, versus expansion PCIe cards, give them more room for faster advancement than Nvidia, or is the fact that they started with a very mature GPU market as a guide the biggest boost? If integration is key, will Nvidia reach a point where they get beat because they are external to the CPU?


Maybe, but is the interface a significant factor here? From what I understand, one of Nvidia's biggest advantages among GPU companies is their strong software stack (e.g. CUDA). That suggests Apple has a different kind of integration advantage (they probably need a much smaller software base because they control a lot of the other side of it).

I must say I don't see anyone outcompeting Nvidia anytime soon though. I just don't see anyone having any significant edge. But I didn't see Apple coming either, despite the iPhone showing clear signs of some very exciting performance increases year after year. I guess I didn't think Apple was interested/thought it was worth it. After all, their market/value (imo) stems not so much from a CPU advantage (very clearly it didn't just a few years ago, laptop-wise anyway). And it's risky; it could end up as it did before with PowerVR.


If Nvidia gets to a point where they have to onboard their GPU to keep scaling, I don't see why they couldn't do it. They design their own SOC packages now, and have been integrating Nvidia IP into third-party chips for well over a decade. Once the need arises for "Nvidia Silicon" beyond what we see in Tegra and Grace, I wager we'll see it.

From a cynical perspective, Apple's focus on integrated graphics kinda kills any hope they had of scaling to Nvidia's datacenter market. I mean, just look at stuff like this: https://www.nvidia.com/en-us/data-center/gb200-nvl72/

  GB200 NVL72 connects 36 Grace CPUs and 72 Blackwell GPUs in a rack-scale design.
Tightly integrated packages do work wonders for low-power edge solutions, but they simply don't enable the sort of scaling Nvidia has pulled off as fast as they've done it. CUDA is certainly one aspect of it, but Apple had the opportunity to make their own CUDA competitor with OpenCL through Khronos. The real reason Apple can't beat Nvidia is that Apple cannot write an attractive datacenter contract to save their life. Nvidia has been doing it for years, and is responding to market demand faster than Apple can realize their mistakes.


TDP numbers aren't benchmarks, and involve lots of non-GPU hardware. It's fair to argue AMD has a power advantage still (though I'd want to see more targeted measurements). But to argue for a factor of two based on package thermal parameters is silly.


Intel has never shipped a chip where the iGPU can come close to using the full TDP. I'm not sure the Meteor Lake iGPU can even reach half of the TDP. In actual gaming the CPU cores will be pulling more power than the iGPU as often as not.


Intel changed their TDP on more recent chips to be more reflective of reality.

Actual peak power for the AMD chip will be at least 2x higher, or 18-60W, which isn't that different (and I believe it can actually hit even higher than that at the highest configuration).

The question isn’t peak power, but how much performance you get for that power.


> Actual peak power for the AMD chip will be at least 2x higher, or 18-60W, which isn't that different

Agree on the wattage, but it's still a 2x difference. Intel says maximum turbo power for that chip is 115W; here are the specs: https://ark.intel.com/content/www/us/en/ark/products/236847/...


Foolish question: why is there no Xilinx attack? This seems to be the hour of programmable, specialized computing.


How do you pronounce Xe? I feel like that might just be a huge psychological barrier for most people.


Apparently the official pronunciation is "ex ee" (not "zee").



Those were the same video... Whoops!

Intel Keynote: https://www.youtube.com/watch?v=-kWiRrf2o6Q&t=570s


So it's like gif, I'll keep saying zee.


Honestly nobody cares. It's not even a good GPU.


I honestly felt kind of embarrassed asking (especially here). But also that fact means something here.


like *.exe


X is ten, so tenny. Like a Ver, but bigger.


Yanny


I pronounce it in my head (wrongly) like xi/shi


Zee

Zeh

Zay

Chee

Cheh

Chay

Hee

Heh

Hay

Ecks-uh

Ecks-Eee (really strange that this is the "real" one considering it doesn't resemble an acronym)

Ecks-ay



Unfortunately a weird train wreck of wrong and strange statements.


It's a technical article about an esoteric topic; you don't usually get good discussion.

Sometimes it's good for a laugh. Like I remember a commenter in one of the XZ posts saying that open source contributors should be required to have US security clearance.

The top comment talking about Vista (2007), Crysis (last game 2011), The Witcher (last game 2015) and the Datsun name change (1986) is really something.


Yep, or the fact that such a commenter has 70k karma on HN is something, too.


I looked and he’s one of those people who submit 2 posts per hour. And like, I wouldn’t want to do that, but if you’re a news junkie who feeds articles into HN soon after they come out and gives us stuff to talk about, it makes sense that you’d have huge karma in the long term since you’re providing a service to the HN community.


Two posts per hour and deep comments that turn into word salad under close examination? Could be a GPT bot as well.


I have a suspicion that the reason a lot of LLM bots can output stuff akin to word salad is because a lot of their training data is word salad in the first place. If you're a competent reader pre-LLM, you can mostly avoid/discount word salad, so its prevalence doesn't really register.


My YOShInOn can't write text, at least not yet. (I am thinking about T5 to attach Mastodon tags) I think most generative models would be too polite to say something like that or would be more tentative.


daniel-cussen has entered the chat


[flagged]


Why are you talking about "YOShInOn" like people know what you're talking about?


It's a better marketing tactic than posting a link to a sign up form for a product that doesn't exist yet. When I do get that blog post done I know there will be pent up interest. For now see

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Not just that, but the wrong and off-topic statements also get upvoted to the top. Maddening.


I've found r/hardware to often have good discussion. As a bonus, a fair number of ex-Anandtech journalists frequent the sub too and often chime in.


/r/hardware is pretty bad about anything that isn't a gaming-oriented product, and even then it's a crapshoot.


I don't know if I'd agree. Threads that have a decent number of comments (10+, ideally 50+) often have people correcting or clarifying each other, e.g. in topics like foundries/process nodes. There's a recent thread [0] on the new iPhone using 2nm, and the discussion there too is about how it's unexpected, etc. This might be "basic" or simple for folks in the industry, but for the average tech nerd like me it certainly seems good quality enough.

[0] - https://old.reddit.com/r/hardware/comments/1c0rdhi/2nm_chip_...


Feel free to correct and clarify those statements.


What, you think that's a worthwhile endeavour? Getting people to change their mind on this site is far worse and a far more toxic experience than on reddit.


Well, the article is a bit of a mess.

... so, what exactly makes it ambitious?

The article itself brands AMD as an afterthought in "GPU" (note: virtually no references to actual graphics; I get it, AI pushes all the investment these days), and then plops down performance graphs that generally show AMD to be better.

It talks about "compromised" gaming as the stated bar in the opening paragraph.

I understand that Intel has shown near-complete managerial incompetence in discrete graphics despite trying every five years or so. So maybe "not totally failing" is "ambitious".

Maybe at some point GPUs are "good enough": once you can push 4K 120 FPS with enough antialiased, textured triangles, it's more about art design, modelling, etc. (something I actually think AI will be really beneficial for).

Alas the tantalizing prospect of real time raytracing seems to be backburnered under the AI hype onslaught.

So maybe the "lesser" iGPU will be a new sweet spot for a couple of generations. Maybe having all the high-end cards scooped up for LLM training will have the PC gaming market take a breather and work on optimizing for a somewhat stable hardware capability.


(author here) I appreciate the feedback, but I have trouble understanding where you're coming from.

"what exactly makes it ambitious?" I thought I outlined that it was much more powerful than Intel's prior (RPL) iGPUs both in the first paragraph, and in the conclusion. It competes with the powerful iGPUs AMD has been getting into handhelds.

"AMD as afterthoughts" - Can you explain how you got that impression? I opened by noting how AMD's APUs are extremely competitive if not downright dominant in handhelds (Steam Deck and ROG Ally called out as specific examples) and threatens Intel in the laptop scene too.

"4K 120 FPS" - uh no, you're not getting that on an iGPU unless it's a game from 15 years ago. I suggest checking the very wide variety of other reviewers who run game benchmarks on devices like the Steam Deck or ROG Ally. 1080P or 720P 30 FPS is a good target, and you might need medium or low graphics to get there. That's what I mean by compromised gaming. It's not the same experience as say, gaming on a desktop with a midrange discrete card.

"lesser" iGPUs imo aren't a new sweet spot, the sweet spot is just holding on to older cards that still deliver better performance than these iGPUs. For example check Steam's hardware survey (https://store.steampowered.com/hwsurvey/videocard/). There are more people with a GTX 1080 than a RTX 4080. And PC games are optimizing for stable hardware capability. The latest games are usually playable on Pascal.


Basically, NVidia is the bar. Ambitious to me implies at least challenging NVidia. AMD makes competitive GPUs in certain arenas (and for Linux is the main choice due to drivers), but for me no definition of "ambitious" involves even thinking about AMD.

Well, 4k 120 from an iGPU ... you're saying THAT is ambitious? There's the bar!

Historically Intel has about every 5 years started to rumble about getting serious in the discrete markets, and they make some marketing fluff, but nothing even remotely competitive outside the iGPU "meh" range ever comes out.

So if I hear Intel being "ambitious" and then read an article that basically pretends (I'm not accusing you of anything) NVidia doesn't exist, well, seems like a failed premise to me.

I'm pretty negative on Intel over the last decade; you'd think I was a spurned contractor (I'm not, never worked there). Intel is definitely in "prove it" mode. They've so massively failed/squandered opportunity at smartphone chips, SSDs, memory, graphics, and then finally screwed the pooch in process tech and CPUs. So clearly it's an engineering company that was hijacked by finance MBAs and driven into the ground, and it is HARD for companies to come back from that poison, especially when they had about 30 years of near-unchallenged monopoly dominance in the marketplace.

I didn't want to imply "author sux lol"; the article was pretty in-depth and informative. But it remains that the basic premise is flawed, because the source marketing/press release by Intel has about an 80% chance of being BS or "same story, different half decade".


4K 120..."There's the bar!" - By ambitious I meant Intel's serious about getting competitive gaming performance in the handheld or thin/light laptop category. MTL's iGPU is ambitious compared to older standard Intel iGPUs like the HD 530.

"Nvidia doesn't exist" They don't exist in the iGPU market, unless you count the Nintendo Switch. The Switch doesn't run the same games that Meteor Lake and Phoenix do, and therefore I don't think it's an interesting comparison. But I do have data at https://chipsandcheese.com/2023/12/23/nintendo-switchs-igpu-... if you want to factor in Nvidia. Same with Nvidia's discrete cards or AMD's desktop RDNA 3 variant (with the larger 192 KB vector register file). Neither of those can fit in the same form factors and power envelopes that Meteor Lake and Phoenix compete in.

What Intel source marketing/press release stuff did you take issue with? I'll be honest, I didn't go over their Meteor Lake marketing/press release materials in detail. But if they did claim something crazy and didn't deliver, I can understand the disappointment.


https://wccftech.com/gpu-market-rebounds-q2-2023-amd-nvidia-...

> The integrated segment had a total of 48.82 million units shipped worldwide followed by the high-end GPU segment which saw 6.84 million GPU shipments, 2.59 million shipments in the mid-range category, and 1.81 million in the entry-level segment. Workstation GPUs also shipped 1.50 million units.

Integrated graphics may not have the profit margins of dedicated, but in sheer quantity, they dwarf add-in boards.
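A rough back-of-the-envelope from those same quoted Q2 2023 figures (a sketch, not from the article):

  # shipment figures quoted above, in millions of units
  integrated = 48.82
  discrete = 6.84 + 2.59 + 1.81 + 1.50  # high-end + mid-range + entry-level + workstation
  print(discrete, integrated / discrete)  # ~12.74M discrete, so integrated outships add-in boards ~3.8x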


But how many of those 48.82 million units are actually used in any meaningful capacity?

Intel is spending a great deal of money manufacturing stuff that's then utterly wasted on any system with a dedicated GPU, or more commonly never used for more than a small fraction of that 3D capacity. It just seems like a multi-billion-dollar waste from a company that's so used to being a near monopoly it can't step back from the iGPU trap.


>Historically Intel has about every 5 years started to rumble about getting serious in the discrete markets, and they make some marketing fluff, but nothing even remotely competitive outside the iGPU "meh" range ever comes out.

I think the key thing you may be missing here is Intel Arc, which is Intel's first real dGPU. And now they are using that tech in their iGPUs.


> "4K 120 FPS" - uh no, you're not getting that on an iGPU

You are not going to get that from many dGPUs either. You might get that from the high end in the latest or previous gen, but not from the midrange or 2-gen-old models.


It really depends on the game.

I have a 4090 (for work, I swear…). Cyberpunk is smooth, but I don't get 4K 120fps. But I also play a lot of little indie games. Terraria? Stardew Valley? Slay the Spire? This stuff doesn't need a dGPU at all. And I suspect a significant percentage of global gaming hours go into stuff like this now. Games that really push the hardware are expensive (well, or badly made). Either way, it's usually bad for business.


I took the ambitious part to be changing how the gpu was connected to the rest of the system. Seems like a decent jump although not exactly risky.

But the conclusion sums it up well:

> Meteor Lake abandons the familiar Sandy Bridge client design that characterized Intel’s consumer CPU offerings for more than a decade. In its place, we get a flexible chiplet strategy that lets Intel pile on accelerators in response to market demands.


So they're basically following AMD's lead by going to a modular chiplet based package...


I don't think it's really about following AMD. It's the only sensible thing to do given the state of scaling and economics.

All the interesting things Intel is doing are chiplet-based. I think it's partly about showcasing their packaging tech in an attempt to differentiate their fab offering. You could argue there's nothing very special in that respect here; they've done this process and vendor mixing before elsewhere. Except this will be a high-volume product line, so they need good yield and capacity. But that's just my read on it; maybe it's just the cheapest way to stay relevant.


I simply meant it was ambitious compared to prior Intel iGPUs, especially stuff like Skylake GT2 where you could be playing at 720P low and still not get 30 FPS.

The chiplet strategy is kind of ambitious too because there's power overhead and they're targeting battery powered devices with Meteor Lake.

Eh, writing is hard. I never liked English class anyway


Re the power overhead: they probably get some power back by using smaller nodes on some chiplets than they'd otherwise be able to afford if doing the whole design on the same process. And they are betting on the benefits of backside power delivery.


Don't let the haters get you down, I love your articles.


Oh I don't mind the discussion here at all, I'm just occasionally puzzled at things I thought I was pretty direct about.

Honestly though I don't like writing. Finding stuff out about hardware is fun. Weaving it into a coherent article is a chore.


I think most people don't appreciate how hard it is for hardware companies with established products to make non-trivial architectural changes. It's risky and it usually snowballs and becomes more risky. Especially when others in the market are already doing similar things, it downplays the change.

I love researching things too but I can't write coherently. I really enjoyed your article, thank you.


If you're not gaming or doing serious gpgpu stuff, the on-board GPU has been a sweet spot for well over a decade now.


> Alas the tantalizing prospect of real time raytracing seems to be backburnered under the AI hype onslaught.

Can you elaborate on this? Is that because ray tracing needs only simple vector units, while AI is driving tensor-type architectures?


Not OP, but to provide some historical perspective, RTX hardware raytracing is very firmly a gimmick and it isn't AI nonsense that's going to be the end of it. It's going to go the way of PhysX, 3D Vision, and EAX audio. Cool, but complicated and not worth the effort to game devs. Game designers have to make all the lighting twice to fully implement RT, and it's just not worth the effort to them.

Nvidia's own site[1] lists a total of 8 Full RT compatible games, half of which they themselves helped port. There are far more games that "use" it, but only in addition to traditional lighting, to minimize dev costs. Based on that and past trends, I would personally predict it to be dropped after a generation or two unless they can reuse the RT cores for something else and keep it around as a vestigial feature.

[1] https://www.nvidia.com/en-us/geforce/news/nvidia-rtx-games-e...


"Full RT" means the game uses 100% raytracing for rendering (in some mode), which currently needs still far too much power to be a mainstream thing and is only added in a few games to show the prowess of the engine (IIRC a review of the Cyberpunk 2077 Full RT mode only a 4090 is really able to provide the power needed). The important entry is "yes", which shows far more entries and means there's Raytracing enhancements in addition to rasterization.

So, no, it's quite the opposite of what you stated: RT gets more important all the time, is not a gimmick and there's zero reason to assume it will be dropped in the future.


It is a gimmick in that you have to sacrifice far too much performance. An RTX 4080 will need to run at 1080p + upscaling + framegen to get above 60 (!) FPS with ray tracing.

No thank you, I’ll take buttery smooth real 120 FPS at 4K. Especially because games have gotten so good at faking good lighting.

Maybe with the RTX 6xxx series it’ll be viable.


It does look fabulous though. I have a 4090 and absolutely turn RT on for cyberpunk. Even with a 4090 I use upscaling for a good frame rate. But the resulting image quality is just spectacular. That game is really beautiful.


No contest there, it looks really great! But for me, not enough to go back to “choppy” gameplay.


Personally, I'd rather call 120fps and 4K the gimmick. If I had to choose between raytracing or 120fps? Always raytracing.


You could argue 4K as a gimmick if you’re sitting at TV distances, but the difference between 60 and 120 FPS is extremely jarring. Try playing at 120 and then mid-session capping it at 60.


I would hardly say it's a gimmick, now that frameworks like Epic's Unreal Engine and others have implemented it for the developer. I don't see these technologies going away. One can hope that Nvidia's dominance lessens over time.

I believe the next big thing is generative AI for NPCs, as soon as the models are optimized in hardware for the average GPU. Let's see what the next generation of Intel, AMD, and Arm produce. Windows' branding of AI is going to make this possible. It's going to take years, though, for the market to be saturated with capable enough hardware for developers to pay attention.


You do realize that RT greatly simplifies the job artists and engineers have to do to make a scene look well lit? The only reason it's done twice currently is because GPUs aren't powerful enough yet. RT will simplify game production.


The government needs/invests in Intel chips, and they will deliver a paradigm shift soon.


Justification for the assertion?




Waiting for Strix Halo.



