Intel's First 10nm Cannon Lake CPU Confirmed (forbes.com)
76 points by john58 7 months ago | 26 comments

> Interestingly, there doesn't appear to be evidence that the CPU includes an integrated graphics processor - something that's listed in detail on the product page of the Coffee Lake-based Core i3-8109U. This could simply be down to the fact that Intel does not wish to reveal details of the onboard graphics of Cannon Lake CPUs at this time, or it's decided to cut the feature out of certain product ranges in favor of using separate graphics cards.

I wonder if they may have cut onboard graphics to increase yield by lowering the bar for a QA pass in the lower product ranges. If they are having process difficulties at 10nm, that would allow them to push some of what would normally be rejected to market.

This seems like the most likely case.

If the GPU part has a failure, you can disable it and rescue the CPU; if the CPU has a failure, you might rescue the GPU. Selling dies with failed GPUs as cheaper products and fully functional ones as top-of-the-line products makes sense.

The same happens with multi-core chips and even with failed caches: just disable the broken parts and sell them as low-end chips.

Or they knew that they were going to have yield issues, and these dies don't even have a GPU in the first place rather than binning them after the fact.

The solution they pick is a typical linear-programming problem over die size, yield, and the prices of the different options after disabling failed areas.

They also layout with some understanding of the yield (yield is almost entirely a function of die area at a given node). So if they decide to just entirely bin off the GPU, and they know this ahead of time enough, it makes more sense to not have a GPU in the first place. The GPU is about half the die area, so you would get better yield by just not having it at all (but all of this depends on having enough lead time to decide this before layout).
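The point about yield being a function of die area can be illustrated with the classic Poisson yield model, yield = exp(-defect_density × area). The defect density and die area below are invented for illustration, not Intel figures:

```python
import math

# Toy Poisson yield model: yield = exp(-defect_density * die_area).
# Both numbers below are made up purely to show the shape of the
# trade-off; they are not real process data.
def die_yield(area_cm2, defect_density=1.0):
    """Fraction of dies with zero defects, per the Poisson model."""
    return math.exp(-defect_density * area_cm2)

full_die = 0.7           # hypothetical CPU+GPU die area in cm^2
cpu_only = full_die / 2  # "the GPU is about half the die area"

print(f"full die yield: {die_yield(full_die):.1%}")   # ~49.7%
print(f"CPU-only yield: {die_yield(cpu_only):.1%}")   # ~70.5%
```

Halving the die area doesn't just double the number of dies per wafer; it also raises the fraction of defect-free dies, which is why dropping the GPU entirely beats fusing it off after the fact, if you know early enough.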

Any links to such an LP problem? I'm curious to see how they formulate it. Thanks in advance.
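A toy version of the allocation problem: maximize revenue from the binned dies on a wafer, subject to supply and demand constraints. A real fab would hand this to an LP solver; this instance is small enough to solve by direct enumeration. Every number here (bin counts, prices, demand cap) is invented for illustration:

```python
# Hypothetical dies per wafer by bin, and hypothetical SKU prices.
good, gpu_dead = 300, 150        # fully working dies vs. dies with a dead GPU
price_i7, price_i3 = 400, 150    # revenue per unit for each SKU
i7_demand_cap = 200              # can't sell more i7s than the market absorbs

# Decision variable: how many good dies to sell as i7s. Leftover good
# dies get their GPU fused off and sell as i3s; GPU-dead dies are
# salvaged as i3s. Enumerate all feasible allocations.
best = max(
    n_i7 * price_i7                 # good dies sold as i7
    + (good - n_i7) * price_i3      # good dies fused down to i3
    + gpu_dead * price_i3           # GPU-dead dies salvaged as i3
    for n_i7 in range(min(good, i7_demand_cap) + 1)
)
print(best)
```

With more bins, SKUs, and constraints (wafer starts, contractual volumes, cannibalization between SKUs) the problem stops being enumerable and becomes a genuine linear program, but the objective and constraint structure stay the same.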

Or NAND flash memory/drives, they chop it up!

That makes a lot of sense. I was confused as to why they'd release a version that's both targeted towards low power (which makes sense for a new node), but doesn't have an iGPU.

Dumb question: would such a thing make sense for blade servers or something like that, where you want low watts/MIPS but don't care about GPUs?

Am I missing something, or did they just update the page? The Intel page specifies an Intel® Iris™ Plus Graphics 655 as the GPU, from what I can see.

Almost certainly; their cash-cow server market is still stuck on 14nm through next year.

There's a tiny bit more information in the recent short article about this CPU at WikiChip.[1]

[1] https://fuse.wikichip.org/news/1285/intel-launches-cannon-la...

"confirmed" seems a little strong for them pointing to a single article in German citing an un-sourced ad in Chinese...

Check out ark.intel.com for confirmation.


I did not expect to see AVX-512 support for a Core i3.

They need to drive up adoption somehow, right?

Well, and it's now up on ark.intel.com

This is a dual-core 2.2GHz processor at a TDP of 15 Watts, how does it compare to modern ARM mobile processors?

Looks like a Snapdragon 845 is ~2W for the CPU and ~5W for CPU+GPU: https://www.anandtech.com/show/12520/the-galaxy-s9-review/4 https://www.anandtech.com/show/12520/the-galaxy-s9-review/6

Tegras can get into the 10-15W range which I guess explains why they're now considered automotive processors instead of mobile.

The highest ARM TDP available now is 2W from the Cortex-A75. That's 7.5x as efficient.

The current Intel 8109U scores about 2x the IPC of the A75 (the main core in the Snapdragon 845). So even assuming a generous 15% uplift, it's the same story as the last five-ish years: ARM is much more efficient, but Intel has more total processing power (especially in the higher power envelopes).

That's 1/7.5x the TDP, not 7.5x as efficient. Efficiency is measured in units like requests served per joule. ARM processors typically use less power but also do less work per unit time.
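The distinction the comment draws, with illustrative numbers only (these are not benchmark results): a chip that draws 1/7.5 the power is not 7.5x as efficient unless it also does the same work per second.

```python
# Efficiency = work per joule, not low power draw.
# All throughput and power figures below are invented for illustration.
arm = {"watts": 2.0,  "ops_per_sec": 20e9}   # hypothetical low-power core
x86 = {"watts": 15.0, "ops_per_sec": 60e9}   # hypothetical high-power core

def ops_per_joule(chip):
    # joules = watts * seconds, so ops/joule = (ops/sec) / watts
    return chip["ops_per_sec"] / chip["watts"]

print(ops_per_joule(arm) / 1e9)  # 10.0 Gops/J
print(ops_per_joule(x86) / 1e9)  # 4.0 Gops/J
```

In this made-up example the low-power chip is only 2.5x as efficient despite drawing 7.5x less power, because it also does a third of the work per second.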

Intel's aim at the ARM market ended when they killed Atom and had to lay off 11 percent of the workforce working on it. It doesn't help that most Atom chips had such a huge flaw that it scared Intel out of their wits and forced them to establish a cash reserve to deal with the blunder.

I was under the impression that i3s were the weaker dies: if for any reason there was a defect in some part of the cache, they would disable it and sell the die as an i3. I don't know whether that's true, but in any case, is there anything to read into the fact that the i3 would be released before the i5 and i7?

i3s also have the lowest TDP, so the smaller node may make the most sense there first. Particularly if you're not getting the yields necessary to target servers quite yet.

Easier to produce lots of i3s because of high failure rate?

That was my thinking too, given the yield problems they had with this process, but I'm not qualified to say.
