Intel don't market the node itself to consumers but rather the architectural generation. The whole 10nm vs 7nm debate doesn't create a perception of Intel being behind; rather, they now are behind because of the move to TSMC 5nm and the fact that Intel still can't get volume production out of 10nm for any product line that requires top performance.
The cynic in me feels that this is recent FUD originating from Intel marketing to try to smokescreen the disaster that is Intel 10nm. A little too late, imo.
It's not that 10nm is bad but rather how long they've actually been working on it. If it had only taken twice as long to get to volume production they would have been well ahead of the competition; remember the old tick-tock.
Arguably 10nm was supposed to be here in 2016, but it only actually went into production in 2018, hence the 14nm+++, and it's still not there yet.
Sadly not the case. There are two other, and probably more important, constituencies.
First are analysts: market and financial analysts. I have spoken with some really good ones, some of whom came from their respective industries and some of whom didn't but nonetheless really knew their stuff. And I've known some who never really understood but grasped at things that looked scientific-ish and ran with them. Sadly, this latter group outnumbers the former. Analysts cause stock prices to go up and down and often set the consensus that is adopted in the press.
And the second constituency is, at the end of the day, the one that matters: consumers, the people who spend money on the product. Most of them get their info from the consensus opinion, as they can't possibly have expertise on everything you need to know (nobody can).
It's stupid that so much rides on this arcane technical question but in this case Intel is reaping the whirlwind it sowed: they put part of their highly technical product line right in front of the consumer in ads and stickers on the products. Now the consensus on the street, fair or not, is that they are behind the times.
Ha, have you spoken with any analysts? They are paid to express an opinion. So they do.
There are good ones and terrible ones. Some people listen only to the good ones and most people don’t know how to tell the difference.
Apple/Nvidia/AMD/etc. buy fab capacity, and they understand the node-by-node differences across the various providers. They then manufacture a semiconductor product, which involves many other factors beyond the process node, and which others buy: consumers/data centers/car manufacturers/etc. Knowing the process node makes little sense in the buying decision of the latter.
The lines for evaluating the end product get even blurrier once we get into chiplet or big.LITTLE designs, both of which mix process nodes to a lesser or greater degree respectively. Once we actually get to products, the overall package needs to be evaluated, not the process node.
There is a "fabless" division of Intel that uses their fab to do designs for third parties. You won't hear much about it because a) it's for relatively niche stuff and b) it's historically very uncompetitive in terms of IP offering and schedules. So nobody uses it. Every so often they try to re-invent it; a few years back they were pitching the Altera acquisition as a benefit, but so far it's remained uninteresting. Being able to offer capacity might change things for some prospective customers.
Note that I'm not talking about educated consumers, who will look at other variables, but rather the consumers who will buy a computer and want the cutting edge without knowing too many details.
Intel have been happily enjoying huge market share on 14nm for the better part of a decade after all.
I really feel this is so relevant now only because Intel's 10nm has become such a black mark. Everyone talks about the problems, and that's what might be bleeding through into consumer consciousness.
With all of the processor flaw mitigations, we saw the Lake series effectively lose all of their gains, however slim, between each generation.
Rocket Lake, the newest, still gets a 2 to 16% performance hit (depending on task) with mitigations on.
At this point, Intel is behind where they started with the Core line.
Unlike a sibling comment, I never knew that the lithography process did not correspond to any physical characteristic in the chip. I naively assumed that the headlines saying that Intel was behind in process technology were correct, and used that to inform purchasing decisions. I found the article enlightening, and I think many on HN with the same background as myself will agree.
As an aside, I tried to read the linked IEEE paper, but the page is cut off for me below section 2. If anyone has a link to the full PDF, I would appreciate that.
Those headlines are correct, and were at the time this article was written as well. Intel's first 10nm process was completely broken, and their current less-dense 10nm process is still barely usable. Any density advantages Intel was planning on their 10nm having over TSMC 7nm are meaningless at this point. Higher density is pointless unless you can get yields that are good enough to ship chips that are meaningfully better on power, performance and price.
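The yield point can be made concrete with the classic Poisson die-yield model. This is a minimal sketch; the die area and defect-density numbers below are made up for illustration, not Intel's or TSMC's actual figures:

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Classic Poisson yield model: fraction of good dies = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Hypothetical numbers: the same 1.5 cm^2 die on a mature process
# (D0 = 0.1 defects/cm^2) vs a struggling one (D0 = 1.0 defects/cm^2).
mature = poisson_yield(1.5, 0.1)      # roughly 86% of dies usable
struggling = poisson_yield(1.5, 1.0)  # roughly 22% of dies usable
print(f"mature: {mature:.2f}, struggling: {struggling:.2f}")
```

The takeaway: at a high enough defect density, a denser process easily produces fewer good chips per wafer than a less dense but mature one, which is why density claims mean little without yield.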
Please note that Dennard scaling ended long ago; gate-length reductions do not automatically come with gains on every metric anymore, notably leakage current. Older nodes are more power-efficient for some uses, like infrared remotes.
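A back-of-the-envelope sketch of what Dennard scaling bought and what broke, in normalized units. The 0.7 factor is the textbook per-node linear shrink; everything else is illustrative, not measured data:

```python
# Dynamic switching power per transistor: P = C * V^2 * f (normalized units).
def dynamic_power(c: float, v: float, f: float) -> float:
    return c * v**2 * f

k = 0.7  # textbook linear shrink per node generation

p_old = dynamic_power(1.0, 1.0, 1.0)
area_ratio = k**2  # transistor area shrinks by k^2 each node

# Classic Dennard scaling: capacitance and voltage both scale by k while
# frequency rises by 1/k, so power density (power / area) stays constant.
p_dennard = dynamic_power(k, k, 1 / k)
print(p_dennard / area_ratio)  # 1.0 -> power density unchanged

# Post-Dennard: leakage prevents voltage from dropping further, so power
# per transistor barely falls while area keeps shrinking -> density climbs.
p_modern = dynamic_power(k, 1.0, 1 / k)
print(p_modern / area_ratio)  # > 2x the old power density
```

This is why a shrink no longer automatically helps every workload: the density gain arrives, but the power-density penalty comes with it.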
The criticism is that feature size isn't useful for chip designers to choose a fab anymore, so they should be ignoring it too nowadays. (The meta criticism is that they already know it, no need to point it out.)
In plain English: when someone conveys made-up information for their own gain, we call this lying.
Different applications would probably use different scores, based on a table of numbers:

- Count of transistors within a single layer (e.g. what's linked above)
- Count of transistors within a 3D volume
- Performance (FLOPS? some other standard?) of that volume under various thermal conditions, power constraints, etc.
- Latency at (the pins and) edges during the common test pattern (during the above profile slots)
- Efficiency: how much power is consumed in the above states
- Weight: sometimes it matters; this should be measured and published even if it isn't used in the score
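The table-of-numbers idea above could be sketched roughly like this. All field names, figures, and weights here are hypothetical, invented purely to illustrate how different applications would weight the same scorecard differently:

```python
from dataclasses import dataclass

@dataclass
class NodeScorecard:
    """Hypothetical per-process scorecard; every field is illustrative."""
    transistors_per_mm2: float   # planar density
    transistors_per_mm3: float   # density within a 3D volume
    gflops_at_5w: float          # performance under a fixed power budget
    pin_latency_ns: float        # edge latency in a reference test
    joules_per_gflop: float      # efficiency in the tested states
    grams_per_package: float     # weight: published, but not scored

def score(card: NodeScorecard, weights: dict[str, float]) -> float:
    """Different applications supply different weights over the same table."""
    return (weights.get("density", 0.0) * card.transistors_per_mm2
            + weights.get("perf", 0.0) * card.gflops_at_5w
            - weights.get("latency", 0.0) * card.pin_latency_ns
            - weights.get("energy", 0.0) * card.joules_per_gflop)

# A mobile buyer might weight energy heavily; an HPC buyer, raw performance.
node = NodeScorecard(100e6, 150e6, 40.0, 2.0, 0.05, 3.0)
mobile_score = score(node, {"perf": 1.0, "energy": 50.0, "latency": 0.1})
print(mobile_score)
```

The point is that no single "nm"-style number falls out of this; the ranking depends on which column the buyer cares about.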
Not convinced it’s a problem though. Intel seems to be doing fine with their farcical 14nm+++ naming.
Intel vs X is a tiny market. Most CPUs don’t go in x86 machines, and the chip companies choose fabs based on many more variables than just “nm”. Product makers pick chips on higher-level stats than that: price, power, performance, features, footprint, heat, etc.
Consumers usually don’t know or care what CPU is in their device (phones, tablets, watches, doorbells, coffee mugs, etc). They pick for features, fashion, and product ecosystem.
And the cloud/data center market (maybe the last bastion of x86) only cares about price + performance per watt.
I think the only people who pay attention to this stuff are PC gamers, and they’re a niche within a niche. And even that is more of a Ford vs Chevy market of brand loyalty.
Maybe some market analysts use it to ding Intel, but I think that’s a minor detail in a much gloomier picture.
It covers the LMC proposal mentioned in this article, but also another proposal more similar to the current node naming, except that instead of using the gate length it uses a combination of gate pitch (the minimum distance between two transistors) and metal pitch (the minimum distance between two wires).
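A sketch of how a pitch-based label could work. The combining formula (a geometric mean) is my own assumption for illustration, not the proposal's actual definition, and the pitch values are approximate figures from public reporting:

```python
import math

def pitch_label(gate_pitch_nm: float, metal_pitch_nm: float) -> str:
    """One plausible way to fold the two pitches into a single label:
    the geometric mean of the pitches. This exact formula is an
    assumption made for illustration, not the proposal's definition."""
    gm = math.sqrt(gate_pitch_nm * metal_pitch_nm)
    return f"GM{gm:.0f}"

# Approximate published pitches for two "7nm-class" processes:
print(pitch_label(57, 40))  # TSMC N7-like pitches  -> GM48
print(pitch_label(54, 36))  # Intel 10nm-like pitches -> GM44
```

Under a metric like this, Intel's 10nm would label as at least comparable to TSMC's 7nm, which is exactly the mismatch the article is complaining about.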
- megapixels for cameras
- MHz for CPUs (back in the day)
- HP for cars