In order for this to work, we need to understand why the US and Japan are behind Taiwan in this technology in the first place. TSMC's value as a foundry is maximized by its ability to fabricate devices that semiconductor design companies cannot fabricate themselves.
Intel and Toshiba apparently don't see cutting edge device scale as a differentiator and so far haven't invested (enough) in matching TSMC in that arena.
There is very little chance, IMHO, that establishing and funding an independent organization to pursue 2nm will yield the desired results. Providing funding tranched on a results basis to existing firms stands a much better chance, provided that they are permitted to pursue 2 nm fabrication without sharing what they've learned, so they gain the benefit of the effort. If that were the approach, I don't see why the US and Japan need to collaborate at all.
Seems more like narrative-supporting publicity than anything else. Ugh.
> Seems more like narrative-supporting publicity than anything else.
Agreed. 2nm-scale manufacturing is contingent on the capabilities available from a complex, interdependent ecosystem of suppliers starting with ASML but continuing to dozens of upstream and downstream sources of essential enabling tech from optics to light sources to resists, etc. Trying to push this from a top-down, government-driven "coordinating" agency will likely fail to accomplish anything meaningful toward the stated goal.
The relevant ecosystem players are already closely collaborating, coordinating or competing. There's no obvious lack of motivation, common ground or communication. And this government effort doesn't have nearly enough money to entirely self-fund things that are both A) likely to make a meaningful difference, and B) aren't already being worked on. Thus, they'll have lots of meetings, then make some bets they can afford, funding lower-odds things which aren't likely to pan out (or they'd be the higher-odds things already being bet on by the ecosystem).
> Trying to push this from a top-down, government-driven "coordinating" agency will likely fail to accomplish anything meaningful toward the stated goal.
But isn't this exactly what China did with Shenzhen and is attempting to do with their semiconductors as a whole, which is the whole impetus for what the US is doing?
How much are those failures attributable to issues with the policy and how much are they attributable to reactions from opponents in Taiwan, South Korea and America?
How much does their being worth protecting actually protect them? I suspect we have the same concern - that Taiwan is more likely to be subsumed if their products can be sourced elsewhere - but I don’t know if this is valid.
I think "One China" is about a lot of things beyond commerce.
States do have an interest in protecting their trading partners, insofar as it's still a good deal. And the poorer the country, the lower the fruit for a hungry invader.
If Intel and others could even get 7nm quality chips working that would likely be good enough for the foreseeable future.
I really want to see a proper head-to-head of ARM vs x86 with chips at the same fabrication scale - right now part of the reason ARM (and Apple's new CPUs) are leagues ahead is just that they are on a different manufacturing process. A modern AMD processor at identical scale might actually be (somewhat) close to the power efficiency of ARM. I'm sure it would still lose, but it would be interesting.
Those companies sold American manufacturing overseas and are now demanding money to bring it back. I like the idea of an independent organization, though I'm not sure it will succeed.
Just so everyone is aware, terms like "2 nm" are industry jargon used primarily for marketing and roadmap comparisons of process nodes. They don't really have any physical bearing on the sizes of the devices involved anymore.
I hear this statement quite often, but never paired with what numbers we should be using. Are there any? Do we know anything about these fabs that is worthwhile to make a comparison?
Like any sufficiently complex thing, it takes a lot of numbers to create a representative profile.
One somewhat useful figure you’ll see used is MTr/mm^2 or mega-transistor per square millimeter. That number can change depending on what exactly those transistors are being used for (which cell libraries are used).
IIRC Intel proposed a standard ratio for such measurements a while back.
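If memory serves, that proposal weighted a small logic cell against a larger sequential cell. A minimal sketch of that style of metric (the cell areas, transistor counts, and 0.6/0.4 weighting below are all placeholders, not real library data):

    # Sketch of a weighted transistor-density metric: blend a small logic
    # cell (NAND2) with a larger sequential cell (scan flip-flop).
    # All cell numbers below are made-up placeholders.
    def weighted_density(nand2_area_um2, sff_area_um2,
                         nand2_transistors=4, sff_transistors=36,
                         w_nand2=0.6, w_sff=0.4):
        """Return density in MTr/mm^2 as a weighted mix of two cells."""
        # transistors per um^2 is numerically equal to MTr/mm^2
        # (the factor of 1e6 cancels)
        d_nand2 = nand2_transistors / nand2_area_um2
        d_sff = sff_transistors / sff_area_um2
        return w_nand2 * d_nand2 + w_sff * d_sff

    # Invented cell sizes for a hypothetical "7nm-class" library:
    print(weighted_density(nand2_area_um2=0.03, sff_area_um2=0.3))  # ~128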
Density doesn’t make a process better for everything however.
Useful measurements are particularly difficult when trade-offs are inherent in the options.
For memory: space, speed, power required, longevity (against time, against temperature / ambient energy).
Computation usually requires some tiny bits of very volatile memory (registers, cache, invisible buffers, etc), but similarly must have tradeoffs for space, speed, power, and longevity.
Radiation-hardened / hostile-environment design libraries can also be useful for automotive, heavy industry, military, and space applications. I'm not sure of the details but guess they'd frequently use more space, power, and possibly even active shielding (e.g. additional layers of conductors and/or capacitors sandwiching the logic bits to try to protect them and stabilize local currents).
Up until 45nm the name of a process node was twice the length of a transistor's gate. At that point it stopped being technologically feasible to reduce the gate length but companies were able to find other ways, such as fin FETs, to pack ever more transistors in with each node without having to reduce the transistor length.
This is not correct. First, process node names were the same as the minimum transistor gate length, not 2X. Second, at least down to 28nm, this relationship held true. It may have also been the case for 22nm. It all went out the window with finfet processes.
When they changed from planar transistors to more 3D-like transistors like FinFET, I believe. I'm pretty sure 55nm and probably 28nm still have a minimum pitch of that size.
The problem was that people had a certain expectation that a "45nm" process could fit more transistors than a "90nm" one. It would be kind of awkward to name it something like "22nm++++", where you'd have no idea roughly how much better it was than the old planar "22nm" process.
I don't think it was any one time, just a slow drift away from Moore's law physically while the marketing department followed it strictly (as well as defined the nodes before the processes were actually developed, so they were going off best guess).
What is Moore's law up to these days? I feel like, as a user, nothing got faster for about a decade between 2010 and 2020 (not quite right, more like 2012-2018). Instead everyone was like "oh it requires less air conditioners in a data center!" or "uh we're out of ideas, have 32 cores". OK, but I need more FPS from my game. I have a great air conditioner.
That seems to have changed in the last couple years, though. AMD and Apple seem to have gotten serious.
(I know, Moore's law is about transistor count, and I guess adding more cores technically increases the transistor count. But as a software engineer, I need my chips to get 2x faster every 18 months, or I'll have to start using a profiler or something!)
It used to be that when you shrank a transistor you would be able to increase clock speeds by default, thanks to something called Dennard scaling. That stopped happening in the mid-2000s. Price per transistor, transistor density, and energy efficiency are all continuing to get better exponentially, but new semiconductor plants are also going up in price exponentially, so more and more companies are dropping out of the race. You can look at the Wikichip page to see how much attrition there's been.
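For the curious, the textbook idealization of Dennard scaling is easy to write down; a back-of-the-envelope sketch (idealized numbers, not from any real process):

    import math

    # Idealized Dennard scaling: shrink all dimensions and voltage by a
    # factor k; frequency rises by k while power density stays constant.
    k = math.sqrt(2)      # one "full node" linear shrink

    voltage = 1 / k       # supply voltage scales with dimensions
    capacitance = 1 / k   # C ~ area / oxide thickness
    frequency = k         # gate delay ~ CV/I improves by k
    density = k ** 2      # transistors per unit area

    # Dynamic power per transistor: P ~ C * V^2 * f
    power_per_transistor = capacitance * voltage ** 2 * frequency  # = 1/k^2
    power_density = power_per_transistor * density                 # = 1.0

    print(power_per_transistor, power_density)  # ~0.5, 1.0

The breakdown in the mid-2000s was exactly that voltage (and with it leakage) stopped following that first scaling line, so clock speed no longer rose for free.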
I upgraded from Ivy Bridge to a Ryzen 2700; the difference is extreme. One of those could barely encode a 1080p stream in real time; the other one can encode at least eight, and probably 16, but I never needed that many.
With FinFETs the transistors are not physically getting smaller anymore. Instead, by making taller and taller FinFETs, you can squeeze the transistors closer without actually having smaller transistors. By some definitions, you can say Moore’s Law is already over, or at least no longer operates via the traditional methods.
From my experience, no offense, when you pool a bunch of also-ran teams together to catch up to the industry leader, you just get a larger also-ran team.
I also have reservations about whether academia can surpass the speed of commercial R&D teams on semiconductor tech. I will believe it when I see it.
Also-rans? Are you implying that the U.S. and Japan are "behind" Taiwan? If so, I don't think that is very accurate. The machines that allow Taiwanese companies to create computer circuits come from a Dutch company, ASML (with the second most advanced machines coming from Nikon Japan).
ASML's machines have over 4,000 parts in their own supply chains. Many of these are high technology which even ASML does not know how to produce. Many of them come from the U.S. and Europe. The EUV process ASML uses in its most advanced machines was only possible due to investments made by the U.S. government and U.S. companies.
What is happening here is not an attempt by the U.S. and Japanese governments to compete with Taiwan. It's an investment to ensure the next generation of lithographic technology plays out much like the current generation. Meaning the U.S., its allied governments, and many of their respective companies invest enough to acquire leverage over the technology and are thus able to lock China and other countries out of it.
Fabrication is far easier to catch up on than bootstrapping. If a country started today, they could subsidize fabrication and within eighteen months they could be making strides to deliver more chips from within. It might literally take a decade or two for a new company to build ASML's supply chains and replicate one of their machines, and that would be 10-20 years with no production growth; at that point they would just be getting started.
While ASML's machines are a critical element in the production chain of sub-7nm chips at scale, they are far from the only critical component. Reducing all of Taiwan's leadership to be about access to these machines is beyond ignorant.
It is TSMC who integrates the ASML machine into a node, a node being made of dozens of machines performing hundreds of discrete steps with the right parameters in the right order. It is TSMC who builds fabs with enormously short lead times.
I didn't do that; you put words into my mouth. I said the fabrication problem is easier to catch up on than the bootstrapping problem, which is just obviously true. Chips are fabricated all over the world. TSMC obviously is an incredible company and has difficult challenges, but the difference between them and other foundries is mostly about scale. Scale is something you can typically invest in and pretty quickly make measurable progress on.
Bootstrapping a whole new lithographic process is not. That is something you would have to invest in for years with essentially nothing to show for it. That is a harder problem from a market standpoint and a political standpoint.
But we're not talking generally about chip fabrication here. We're talking about what is going on with the investments from the U.S. and Japan in next-generation chip production. And the bottom line is that the investment here is relatively unrelated to "competing" with Taiwan as a global chip exporter. The most likely thing is that the technology invested in here will be used in Taiwan to fabricate chips.
Japan's original chip industry lead was created by the government funding a bunch of also-rans to cooperate (and have each participant focus on one part of the job) instead of compete (where every team is spread too thin trying to do the whole hog) https://www.youtube.com/watch?v=bwhU9goCiaI
Japan's electronics industry was built with tariffs, import restrictions, currency quotas, ownership limits, strict financial regulation, central planning and indirect subsidies.
"JAPAN AND THE BIG SQUEEZE September 30, 1990. HOW DID Japan destroy the American television industry? The secret history of that strategy reveals how Japanese manufacturers and the Japanese government first created an anti-competitive cartel ..."
"Japan's raid on the American market dates back to 1956, when the largest Japanese manufacturers formed the Home Electronic Appliance Market Stabilization Council, an illegal production cartel. The intent of the cartel was to monopolize the domestic market for television receivers, radios and other home electric products and to exclude foreign imports. Once their home market was secure, they would launch a drive against the far richer American market."
Well yeah, just like everyone else's industries. And the US government was silently encouraging it to create a Capitalist bulwark against Communism in Asia.
That doesn't change that in the mid-70's the Japanese chip manufacturers were failing, and by "pooling together a bunch of also-ran teams together" they actually made better quality chips than the US, not just cheaper ones.
Dennis Ritchie anecdote. He promoted writing functions instead of inlining the contents each time. His employees were hooked, and kept going even when the profiler reported the computer was spending most of its time processing CALLs.
Planning/development for progressively lower N-nm chips seems to happen extremely far in advance - here we are talking about 2-nm when 3-nm is still around the corner.
My question is, does the development of 2-nm happen totally independently of 3-nm? Are they happening concurrently, and 3-nm just got a head start?
Do the advances made during development of 3-nm factor in to the design of 2-nm?
Or is each "N-nm" a somewhat clean slate that brings an entirely new process?
Does each fab invent its own process for N-nm? Or does the "N-nm process" for TSMC look the same as another fab's?
(replies need not say "ackshually, 2-nm is not really 2-nm". we all know this. 90% of the comments so far are about this haha)
At the foundry level, roadmap and research is always ongoing many nodes into the future. The degree of resources provided to a node is of course proportional to how close to becoming a source of revenue it is. Typically different nodes are owned by different development organizations, but they will frequently share development insights with each other. Sometimes the "more researchy" nodes come up with something so great that adoption gets pulled in closer to the current node.
Throughout this process, the foundry is in discussions with OEMs (AMAT, Lam, TEL, ASML, ASM etc), iteratively providing product requirements, and then as the manufacturing cut-in date approaches, a winning vendor/tool is selected for each layer/unit step for the node in question, and the device integration is frozen for high volume manufacturing.
> Does each fab invent its own process for N-nm? Or does the "N-nm process" for TSMC look the same as another fabs?
They all use the same OEMs, and device physics and scaling is the same for everybody, so typically they end up looking pretty similar. However, often there is no clear "best" way to solve an integration problem, so you end up with different solutions depending on the foundry -- and often those solutions are driven more by business considerations and existing supplier relationships than technical merit. In other cases somebody (historically, Intel) comes up with an idea so powerful (FinFET) that everyone else (TSMC) steals it.
The term "2 nanometer" or alternatively "20 angstrom" (a term used by Intel) has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. It is a commercial or marketing term used by the semiconductor chip fabrication industry to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption.
Yeah, it's annoying, but you can see how it happened: the nomenclature was accurate from 10µm to 32nm, and became marketing rather than measurement slowly, not all at once.
The choices are rename everything in the past according to a new objective standard (impractical), make a clean break and use a new objective standard (but what?), or just let 'nm' become some rough and increasingly useless way of indicating die density, where lower is better. So that's what happened.
Up until about 2003 or so, scaling worked near-perfectly, so geometries were essentially the same from generation to generation. This meant that one number, describing minimum feature size, told you what you needed to know. Eventually leakage current became a major issue, so fabs had to do major redesign and change the shapes of the transistors, and different foundries did this differently so that minimum feature length is no longer directly comparable.
You can still compare density: how many transistors or gates in a given area.
> You can still compare density: how many transistors or gates in a given area.
I would say that comparing density comes with its own caveats. E.g. SRAM is denser than logic, so a chip with a lot of cache could skew a density metric. I guess for a true "apples to apples comparison" of processes you'd want to implement the same design on separate nodes with a mixture of SRAM & logic.
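To illustrate with made-up numbers (the SRAM and logic densities below are invented, not real process data), the chip-level figure swings a lot with the mix:

    # Toy illustration: a naive chip-level MTr/mm^2 figure depends heavily
    # on the SRAM/logic area mix, so it says as much about the design as
    # about the process. Both densities are invented placeholders.
    sram_density = 250.0   # MTr/mm^2, dense 6T SRAM arrays (hypothetical)
    logic_density = 100.0  # MTr/mm^2, random logic (hypothetical)

    for sram_fraction in (0.2, 0.5, 0.8):
        chip = sram_fraction * sram_density + (1 - sram_fraction) * logic_density
        print(f"{sram_fraction:.0%} SRAM by area -> {chip:.0f} MTr/mm^2")
    # 20% SRAM -> 130, 50% -> 175, 80% -> 230: same nominal process,
    # very different headline number.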
Why is it impractical? I know nothing about CPU architecture.
Clearly we aren't measuring performance with x-nm terminology, but the manufacturing process. Can we not use transistor density per mm^2, or if 3-dimensional, by mm^3?
I think the most realistic guess would probably be a switch to some different nomenclature like G1, G2, G3 etc for generations or literally anything else like product lines (see what happened to Nvidia's/AMD's graphics card numbering, etc)
The fact that it is shorthand for "increased speed and reduced power consumption" seems to make it a perfect metric for comparing chips, especially for those of us who have zero interest in learning more than one metric to compare chips (almost everyone).
The correct figure of merit is transistor density, MTr/mm² (millions of transistors per square millimeter). Though in reality even the quoted "transistor count" is not a literal count; it's typically a weighted standard-cell figure.
Transistors are more "3D" now, that's why a 10nm process gives you something like the same density as what a 2D planar transistor with 10nm minimum gate length would. See "FinFET" or "GAA" transistors
If you're talking about stacking entire chips, yes that's also used lots of places.
The challenge there is getting rid of the heat from those chips. If you stack lots of chips it gets really hard to get heat out of the ones in the middle.
If you are referring to 3D devices like FinFET then they are absolutely a thing and is being used pretty widely. If you are referring to stacking devices in 3D space then the power density becomes too much and it becomes impractical
Those names are relevant in planar processes (32 nm was the last planar process for Intel). With FinFETs, gate-all-around, ribbon FETs etc. that makes no sense.
The dimensions that matter are the pitches of the transistors (i.e. of their gates) and of the first layer of metal interconnections, because these are the dimensions which determine the size of a logical gate or of a memory cell.
For state-of-the-art CMOS processes, these pitches are in the range 30 nm ... 50 nm.
See for example the table "Comparing Intel 4 to Intel 7", at:
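To make the "pitches are what matter" point concrete, here's a rough sketch of how those two pitches set standard-cell area (all values invented, though within the 30 ... 50 nm range above):

    # Sketch: standard-cell area is set by the contacted gate pitch (CPP)
    # and the metal pitch, not by any "2 nm" feature. All values invented.
    cpp_nm = 50          # contacted gate (poly) pitch
    metal_pitch_nm = 32  # minimum metal interconnect pitch
    tracks = 6           # cell height in metal tracks (a "6-track" library)
    cpp_per_nand2 = 3    # a NAND2 cell is roughly 3 gate pitches wide

    cell_height_nm = tracks * metal_pitch_nm                 # 192 nm
    nand2_width_nm = cpp_per_nand2 * cpp_nm                  # 150 nm
    nand2_area_um2 = cell_height_nm * nand2_width_nm * 1e-6  # nm^2 -> um^2
    print(f"NAND2 area ~ {nand2_area_um2:.3f} um^2")         # ~0.029 um^2

Nothing in that calculation involves a 2 nm dimension, which is the point.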
Imagine if the food industry could get away with this in their marketing:
"10g Fiber Bars" (fiber bars actually contain only 1g of fiber, "10g fiber" just refers to the fact that it's the 10th generation of fiber bar they've created)
Why do you care what the minimum gate length is of a process? Do you design standard cell libraries? Transistor density is the only thing you actually care about, and they've continued decreasing the numbers to roughly match what you'd expect from the "nm" number with old planar transistors.
The comparison to a "10g fiber bar" doesn't work, because in the case of semiconductors the "10g fiber bar" is 10 times better than a "1g fiber bar"
TSMC has started to sometimes use e.g. N7 instead of 7nm, but it really doesn't matter.
I don't necessarily care what the minimum gate length is. But I don't like misleading marketing terms that ostensibly give you information about the product, but in reality don't give you any information at all.
If marketing wants to use some metric to convey how good the product is, that's fine, but if the only information being conveyed is the product generation number, they should not be able to masquerade that generation number as a metric. Otherwise it misleads consumers into thinking, i.e. "2nm chips should be able to contain twice as many transistors as 4nm chips!" when that is false.
Actually "2 nm" chips are supposed to have 4 times more transistors than "4 nm" chips, because the area scales like the square of the length.
Because each process generation was supposed to double the transistor density, the names of the processes have been given to correspond approximately with a geometric progression having the ratio sqrt(2): 500 nm, 350 nm, 250 nm, 180 nm, 130 nm, 90 nm, 65 nm, 45 nm, 32 nm, 22 nm, ... , but then the need to round to integer numbers, combined with the desire to give distinct names to some process variants that have only small changes in transistor density (e.g. "6 nm" vs. "7 nm"), has led to deviations from the original progression.
While the transistor density has increased with each process generation, most recent generations have been content with a less than double density, e.g. with a density 1.8 times greater than in the previous generation.
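Both points are easy to check numerically; a quick sketch (the node names are from the list above, everything else is just arithmetic):

    import math

    # The traditional node ladder is (roughly) a geometric progression with
    # ratio 1/sqrt(2), so that each step doubles density (area ~ length^2).
    node = 500.0  # nm
    steps = []
    for _ in range(10):
        steps.append(round(node))
        node /= math.sqrt(2)
    print(steps)
    # [500, 354, 250, 177, 125, 88, 62, 44, 31, 22] vs. the marketing
    # names 500, 350, 250, 180, 130, 90, 65, 45, 32, 22 after rounding
    # to "nice" numbers.

    # Density scales with the inverse square of the length, so a "2 nm"
    # name implies 4x the density of a "4 nm" name, not 2x:
    print((4 / 2) ** 2)  # 4.0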
If it doesn't matter, why can't they just switch to a density number that translates better than a lookup table and doesn't refer to a measure it has no relationship with anymore?
Because words have meaning, and when a measure/indicator of performance ceases to be a good one, we do not simply change the definition of the words we used; instead we come up with something new.
I.e. when GHz stopped being a useful metric (or sole metric) for CPUs, we switched to other measurements; we did not redefine what GHz represented.
> I.e. when GHz stopped being a useful metric (or sole metric) for CPUs, we switched to other measurements; we did not redefine what GHz represented.
They really did try to muddy the waters back when CPU clock was the only commonplace metric of CPU speed the general consumer would know about though -- as a couple examples, there was the AMD K5 PR200 which was a 133 MHz part but supposedly (according to the AMD marketing team, at least) competitive with a Pentium 200 MHz -- or the AMD Athlon XP 1700+, which does not run at 1.7 GHz as you might think but only about 1.4.
They did stop short of outright calling it GHz, you're right, but clearly the intent was for the consumer to think that a Pentium 4 1700 MHz and an AMD Athlon XP 1700+ were comparable.
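The pitch behind those PR ratings was basically IPC x clock; a toy version with invented IPC figures (neither number comes from AMD or Intel):

    # Toy model: PR ratings pitched performance as IPC * clock, so a
    # higher-IPC Athlon XP at ~1.47 GHz could be marketed against a
    # Pentium 4 at 1.7 GHz. Both IPC values below are invented.
    athlon_ipc, athlon_ghz = 1.16, 1.47  # hypothetical
    p4_ipc, p4_ghz = 1.00, 1.70          # hypothetical baseline

    print(f"Athlon XP 1700+: {athlon_ipc * athlon_ghz:.2f} 'P4-GHz'")  # ~1.70
    print(f"Pentium 4 1.7:   {p4_ipc * p4_ghz:.2f} 'P4-GHz'")          # 1.70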
I don't think any consumer products get advertised this way? "Now with 2-nm technology inside" is not something anyone cares about. It's not a meaningful benchmark for a chip.
The companies that contract with chip foundries probably know what they're getting. It makes reading industry news a bit more confusing, but for most of us, we're reading it for entertainment, not any practical purpose.
It's less that they're marketing deceptively than that the measure they were using stopped making sense. If the ITRS back in the day had known we'd stop using planar transistors, they could have named their nodes for the transistor density they achieved. But they didn't, and so we're stuck with a naming convention that no longer refers to a real physical quantity, even though we are still getting the density improvements per node that we traditionally have.
Decades of tradition. And it's not clear exactly what they would switch to. In general transistor density doubles every node, but different processes tend to have different design rules, so different processes might give you different densities on different designs, making the choice of an exact reference design a fraught question.
Gypsum water is a thing: https://en.wikipedia.org/wiki/Three_Legs_Cooling_Water I've been given this as a treatment for a cold/flu. I drank it (didn't want to be rude) but couldn't stop thinking about how gypsum used to sometimes be contaminated with asbestos (due to coming from the same mines, I think, long ago).
Like household products marketed as "not tested on animals" when all the compounds in the product were already tested on animals a century ago and the overhead for using any novel compounds would mean a vastly higher price (and probably animal testing).
Honestly, to me all these comments are wrong, focusing too much on the 2nm aspect and not the possible R&D outcomes. EUV came to fruition because of R&D collaboration between the US and other countries/companies. TSMC might be the largest foundry, but a majority if not all of their tooling is from the West and Japan: from ASML, AMAT, Lam Research and others. Best believe some acquisitions will happen, especially from the US side, and new tech will be coming out, probably some sort of EUV replacement.
Plus people get trained and know-how gets retained. This discussion is just a step toward undoing the outsourcing of key tech by the previous generation. Not that I want to advocate blind protectionism either.
Names are going to switch to use Å soon, and by the time we get to low single digits in that, we're certainly going to need to move to a whole new computational substrate beyond MOSFETs to advance.
Totally agree. Taiwanese immigration should be made as easy as possible. Create special tax benefits for people with semiconductor experience to come over and help relaunch the industry in the US. Tax breaks, citizenship, family benefits should all be on the table. I really can't think of a downside.
Consider it from Taiwan's point of view. Through hard work and a bit of luck they created a world-class golden goose industry that literally outcompetes everyone else. And then their biggest ally (or the country they thought were their biggest ally) decides to hire the talents away and gut their industry, because it's "too important to be in Taiwan."
10M Taiwanese don't want to live in some desert wasteland. All the desirable places are already full of people.
On top of that, America's politics are horribly toxic and the country is about to become a Christian fascist dictatorship that resembles the Handmaid's Tale. Why would they want to move there?
It was forgotten by the media because said media eventually got back in sync with reality. Which is to say, personally I’d trust citizens more than Western narratives about them.
I hate everything about this. The automotive morons forced semis to shit their pants during the pandemic, we have lots of supply chain issues across the whole production chain (packaging, testing, not only wafer manufacturing!), and here we push for marketing "2nm", where pushing the envelope doesn't make any sense in reality. Scale older processes, make them cheaper, goddamnit! Analog power transistors give zero shits about your nanometers; we need more 90nm and 160nm capacity.
Yeah. Personally, I don't want more microchips in everything. I just want a cheap fridge, a cheap washing machine, or cheap whatever. F** the chips. Even in a car, I want the absolute minimum amount of chips and other nonsense.
No, here you are wrong, mate. Having mechanical (or any other) control for your dishwasher or washing machine is a VERY bad idea. Also for your engine timing and sensor evaluation in your car, also planes, also, basically, everything. I understand the notion where you wonder "WTF there is a WiFi in my freakin' OVEN?!" and "WHY THE HELL DO WE HAVE LCD SCREENS HERE?!", but in general a microcontroller is a FAR superior and __safer__ controlling contraption than anything else humanity has ever devised.
We went to buy a new washing machine (even though the old one wasn't even broken yet). Even the salesman had to admit that they don't make washing machines like they used to, and told us that it would last an average of 8 years, as opposed to much longer for the older ones.