No More Nanometers – It’s Time for New Node Naming (2020) (eejournal.com)
121 points by bcaa7f3a8bbc 3 months ago | 41 comments



No one beyond chip nerds like myself ever really hears or cares about node stepping, and we all know it's more of a name than anything meaningful.

Intel don't market the node itself to consumers but rather the architectural generation. The whole 10 vs 7nm thing does not generate a perception of Intel being behind; rather, they genuinely are behind now, with the competition moving to TSMC 5nm and Intel still unable to get volume production from 10nm for any product line that requires top performance.

The cynic in me feels that this is recent FUD originating in Intel marketing to try and smokescreen the disaster that is Intel 10nm. A little too late, imo.

It's not that 10nm is bad but rather how long they've actually been working on it. If it had only taken twice as long to get to volume production, they would have been well ahead of the competition; remember the old tick/tock.

Arguably 10nm was supposed to be here in 2016, but it only actually went into production in 2018, hence the 14nm+++, and it's still not there yet.


> No one beyond chip nerds like myself ever really hears or cares about node stepping...

Sadly not the case. There are two other, and probably more important, constituencies.

The first is analysts: market and financial analysts. I have spoken with some really good ones, some of whom came from their respective industries and some of whom didn't but nonetheless really knew their stuff. And I've known some who never really understood but grasped at things that looked science-ish and ran with them. Sadly, this latter group outnumbers the former. Analysts cause stock prices to go up and down and often set the consensus that is adopted in the press.

And the second constituency is, at the end of the day, the one that matters: consumers, the people who spend money on the product. Most of them get their info from the consensus opinion, as they can't possibly have expertise on everything they'd need to know (nobody can).

It's stupid that so much rides on this arcane technical question but in this case Intel is reaping the whirlwind it sowed: they put part of their highly technical product line right in front of the consumer in ads and stickers on the products. Now the consensus on the street, fair or not, is that they are behind the times.


True about analysts, but any who don't understand the node naming incongruence between Intel and TSMC/Samsung have no business opining on these companies, and as harsh as it is to say this, investors who blindly listen to them without doing a bit more vetting kind of get what they deserve.


> but any who don't understand ... have no business opining

Ha, have you spoken with any analysts? They are paid to express an opinion. So they do.

There are good ones and terrible ones. Some people listen only to the good ones and most people don’t know how to tell the difference.


If it's FUD then it's been going on for over a decade. Back when Intel was ahead, people would say things like "Intel is first to 32 nm and has a better process than other fabs. So TSMC/GloFo/etc. won't catch up until they hit 28 nm, at which point Intel will hit 22 nm."


But it makes little sense; Intel's fabs were not even in competition with TSMC and GloFo, since they don't sell their fab capacity.

Apple/Nvidia/AMD/etc. buy fab capacity; they understand the node-for-node differences across the various providers. They then manufacture a semiconductor product, which has many other factors beyond the process node, and which others buy: consumers/data centers/car manufacturers/etc. Knowing the process node plays little part in the buying decision of the latter.

The lines for evaluating the end product are even blurrier once we get into chiplet or big.LITTLE designs, both of which mix process nodes to a lesser or greater degree. Once we actually get to products, the overall package needs to be evaluated, not the process node.


> they don't sell their fab capacity.

There is a "fabless" division of Intel that uses their fab to do designs for third parties. You won't hear much about it because a) it's for relatively niche stuff and b) it's historically very uncompetitive in terms of IP offering and schedules. So nobody uses it. Every so often they try and re-invent it, a few years backs they were pitching the Altera aquistion as a benefit but so far it's remained uninteresting. Being able to offer capacity might change things for some prospective customers.


But AMD compete with Intel, and consumers might look at the nm size when comparing mainstream processors to buy, because at one point it was a good indication of how recent the processor was, and thus how fast / power efficient / ...

Note that I'm not talking about educated consumers, who will look at other variables, but rather about the consumers who will buy a computer and want the cutting edge without knowing too many details.


But they really don't. If you jump on an LTT, Gamers Nexus, or any tech review, it will be full of benchmarks, both real and synthetic; the process node might be mentioned, but it's hardly the main selling point.

Intel have been happily enjoying huge market share on 14nm for the better part of a decade after all.


These “other” variables do depend on the process node, so TSMC and Intel processes are in competition via the performance/cost/energy consumption of the chips they produce.


But the process node does not actually tell you what trade-offs the manufacturer is going to make in its architectural choices.

I really feel this is so relevant now only because Intel's 10nm has become such a black mark. Everyone talks about the problems, and that's what might be bleeding through into the consumer consciousness.


As an aside, I think Intel have actually done phenomenal work to stretch their 14nm as far as they have. It's taken a significant amount of power draw to do it, but it's still an impressive feat of semiconductor design imo.


Iterative designs for multiple generations with minimal IPC improvements across multiple sockets are "phenomenal work"?

With all of the processor flaw mitigations, we saw the Lake series effectively lose all of their gains, however slim, between each generation.

Rocket Lake, the newest, still gets a 2 to 16% performance hit (depending on task) with mitigations on.

https://www.phoronix.com/scan.php?page=article&item=spectre-...

At this point, Intel is behind where they started with the Core line.


They stretched the core clocks quite a bit. For sure they had a lot of historical performance lead they had to burn to stay relevant, but I don't think it would have been easy to get 14nm as far as they have. It has definitely got very little left to give going forward though.


As a computer scientist rather than an EE, I learned a lot from this article. I would like to provide some perspective as a person who buys and looks at processor advances in the consumer market with a technical, but not too technical, background.

Unlike a sibling comment, I never knew that the lithography process did not correspond to any physical characteristic in the chip. I naively assumed that the headlines saying that Intel was behind in process technology were correct, and used that to inform purchasing decisions. I found the article enlightening, and I think many on HN with the same background as myself will agree.

As an aside, I tried to read the linked IEEE paper [1] but the page is cut off for me below section 2. If anyone has a link to the full PDF, I would appreciate that.

[1]: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=906...


> I naively assumed that the headlines saying that Intel was behind in process technology were correct, and used that to inform purchasing decisions.

Those headlines are correct, and were at the time this article was written as well. Intel's first 10nm process was completely broken, and their current less-dense 10nm process is still barely usable. Any density advantages Intel was planning on their 10nm having over TSMC 7nm are meaningless at this point. Higher density is pointless unless you can get yields that are good enough to ship chips that are meaningfully better on power, performance and price.


It's not "barely usable" any more, they're shipping Xeons with up to 40 cores at volume. Afaik everything except high-clock desktop parts is shipping on 10nm now. It took them an eternity, but it's not Duke Nukem late.


Those Xeons only make sense if you're able to make very good use of AVX-512 or if you're bureaucratically prevented from buying AMD. They aren't a disaster to the extent that Cannonlake was, but it's still a product that has trouble standing on its merits and is shipping in part because Intel couldn't have cancelled it and told customers to wait for Sapphire Rapids without facing a shareholder lawsuit for lying about the viability of their roadmap.


You're moving the goalposts. After being delayed numerous times and the first parts being of rather questionable usefulness, Intel's 10nm process is doing fine now and shipping in volume. No one claimed that they had closed the gap to AMD with the 10nm Xeons, only that they're shipping. (They also significantly narrowed the gap, but it remains quite large.)


For a consumer it is better to look for benchmarks in the specific field that interests them anyway. The numbers at this moment show that AMD is kicking Intel's ass on both performance-per-watt and price. The only thing Intel still has going for it is single-core performance, and there Apple has leapt over everybody.


https://web.archive.org/web/20201113074058if_/https://ieeexp...

Please note that Dennard scaling has long since ended; gate length reductions no longer automatically come with gains on every metric, notably leakage current. Older nodes are more power-efficient for some uses, like infrared remotes.
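
For reference, a rough sketch of the classic Dennard scaling relations (the textbook version, not from the article), which is what used to make "smaller node" mean "better on every metric":

    % Dennard scaling with factor k > 1: dimensions and voltage shrink,
    % frequency rises, and power density stays constant.
    L \to L/k, \qquad V \to V/k, \qquad C \to C/k, \qquad f \to k f
    P_{dyn} = C V^2 f \;\to\; P_{dyn}/k^2
    \text{Area} \to \text{Area}/k^2 \;\Rightarrow\; P_{dyn}/\text{Area} = \text{const}
    % Leakage current does not follow these relations, which is one reason the
    % free ride ended and an older node can win on idle/standby power.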


Those numbers are there to inform the purchases of chip designers looking for a fab. If you are buying complete chips, you have much more relevant numbers to look at. At this abstraction level, feature size is at best irrelevant and at worst misleading.

The criticism is that feature size isn't useful for chip designers to choose a fab anymore, so they should be ignoring it too nowadays. (The meta criticism is that they already know it, no need to point it out.)


"Unlike a sibling comment, I never knew that the lithography process did not correspond to any physical characteristic in the chip."

In plain English, when someone conveys made-up information for their own gain, we call this lying.



That's a good start.

Different applications would probably use different scores, based on a table of numbers (a rough sketch follows the list below).

+++

Count of transistors within a single layer (e.g. what's linked above)

Count of transistors within a 3d volume

Performance (flops? some other standard?) of that volume under various thermal conditions, power constraints, etc.

Latency at (the pins and) edges during the common test pattern (during the above profile slots).

Efficiency: how much power is consumed in the above states.

Weight: sometimes it matters; this should be measured and published even if it isn't used in the score.
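
To make that concrete, here is a minimal sketch of what such a table and a use-case-weighted score could look like; every field name, unit, and weight below is made up for illustration, not a real proposal:

    from dataclasses import dataclass

    @dataclass
    class ProcessMetrics:
        # All fields hypothetical; units noted per field.
        mtr_per_mm2: float       # million transistors per mm^2 (single layer)
        mtr_per_mm3: float       # million transistors per mm^3 (stacked/3D)
        gflops_at_5w: float      # sustained performance under a 5 W budget
        edge_latency_ns: float   # latency at the pins/edges for a test pattern
        watts_idle: float        # power consumed in the idle state
        grams: float             # weight, published even if rarely scored

    def score(m: ProcessMetrics, weights: dict) -> float:
        """Weighted sum over whichever metrics a given application cares about."""
        return sum(w * getattr(m, field) for field, w in weights.items())

    # A mobile buyer might weight efficiency and weight heavily; an HPC buyer,
    # density and sustained performance. Same table, different scores.
    mobile_weights = {"gflops_at_5w": 1.0, "watts_idle": -50.0, "grams": -0.1}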


I know very little about this, but shouldn't the ideal unit consider volume? They're going to eventually do clever 3d stuff I'm guessing.


CPUs are starting to move toward 2.5D and 3D packaging, but for the foreseeable future thermal limitations will prevent a fully 3D processor from being viable. Transistors per unit volume doesn't really have advantages over transistors per unit area when we're at most looking at stacking some cache on top of the CPU logic (and in the near future, we're really just putting chiplets alongside each other and providing interconnections with similar performance to communication across a large monolithic die).


You're totally right that heat is the enemy to 3D designs. They will only work with low power, or if nanoscale heat pipes can be integrated into the design.


This should be upvoted to the top of the discussion. The article never makes a suggestion for what should replace nanometers as the node name. This is a great suggestion: million transistors per square millimeter.
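
As a rough illustration of how that metric reorders the marketing names, here are commonly cited peak logic density estimates from public reporting around 2020 (approximate figures, not from the article; shipping products land well below peak):

    # Approximate peak logic density in million transistors per mm^2 (MTr/mm^2).
    density = {
        "Intel 14nm": 37.5,
        "TSMC N7":    91.2,
        "Intel 10nm": 100.8,
        "TSMC N5":    171.3,
    }
    for node, mtr in sorted(density.items(), key=lambda kv: kv[1]):
        print(f"{node:11s} ~{mtr:6.1f} MTr/mm^2")
    # Sorted by density, Intel "10nm" lands between TSMC "7nm" and "5nm",
    # which the nanometer labels alone would never tell you.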


Aka the numbers are made up.

Not convinced it’s a problem though. Intel seems to be doing fine with their farcical 14+++ naming


Kind of the same as clock speed, though Intel was on the other side of the coin a decade or two ago, pushing the idea that clock speed was the only performance measure that mattered (until it changed its mind with the Core architecture).


This whole thing seems like a wrong take. I don’t think TSMC even registers Intel as competition these days.

Intel vs X is a tiny market. Most CPUs don’t go in x86 machines, and the chip companies choose fabs on so many more variables than just “nm”. Products pick chips on higher level stats than that: price, power, performance, features, footprint, heat, etc.

Consumers usually don’t know or care what CPU is in their device (phones, tablets, watches, doorbells, coffee mugs, etc). They pick for features, fashion, and product ecosystem.

And the cloud/data center market (maybe the last bastion of x86) only cares about price + performance per watt.

I think the only people who pay attention to this stuff are PC gamers, and they’re a niche within a niche. And even that is more of a Ford vs Chevy market of brand loyalty.

Maybe some market analysts use it to ding Intel, but I think that’s a minor detail in a much gloomier picture.


In my view, there might be uses for different kinds of units, depending on what you care about. Nanometers are interesting to me because they relate to progress in lithography technology. Nodes or gates per acre tells us something. Achievable information storage or bandwidth per acre tells us something else.


Perhaps we should migrate to a "6502s per square millimeter" metric or something?


No kidding. I think from some Larrabee conversations that we're already at > 1 Pentium core per mm2 (the original P5).
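
Back-of-the-envelope version, assuming a round ~100 million transistors per mm^2 for a current leading-edge node and the commonly quoted transistor counts for the old parts:

    # Rough sanity check of "classic CPUs per mm^2".
    DENSITY_PER_MM2 = 100e6             # assumed ~100 MTr/mm^2, round number
    mos_6502_transistors = 3_510        # commonly quoted count for the 6502
    pentium_p5_transistors = 3_100_000  # original Pentium (P5)

    print(DENSITY_PER_MM2 / mos_6502_transistors)    # ~28,000 6502s per mm^2
    print(DENSITY_PER_MM2 / pentium_p5_transistors)  # ~32 P5 cores per mm^2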


Related article (that was previously discussed on HN): https://spectrum.ieee.org/semiconductors/devices/a-better-wa...

It covers the LMC proposal mentioned in this article, but also another proposal more similar to the current node naming which, instead of using the gate length, uses a combination of gate pitch (minimum distance between two transistors) and metal pitch (minimum distance between two wires).


I do not understand why anyone would care about nanometers at all. There are power efficiency, performance, reliability, and price/value characteristics that are more important to an average consumer than the lithography process size.


Because the node size used to be a reliable proxy for power efficiency, performance, and price. That was true for decades, until feature sizes became so small that quantum effects changed the game.


Because people like simple numbers.

Examples:

    - megapixels for cameras

    - MHz for CPUs (back in the day)

    - HP for cars

etc...


So nanometres are the new megahertz.


(2020)



