Hacker News
Report: TSMC's 3nm Fab Could Cost $20B (eetimes.com)
132 points by baybal2 on Oct 9, 2017 | 78 comments



Is 3nm an actual measure of a distance, or is it entirely a marketing term like 3G cell phone service? I can't quite get a straight answer. As far as I could tell from outside the field, at around ~15nm the number stopped being a measure of any feature, became more a measure of precision, and then became a 'version' to be decremented rather than relating to a 'meter' in any way. When I look at electron micrographs of the transistors they don't appear to be 3nm in size... Anyone able to help here?

At 3nm, you get smaller than a biological protein and have features with countable numbers of atoms. And as far as my education went, quantum effects start to dominate, and bulk material properties start to (mis)behave very differently.


3nm is the physical size of the smallest feature dimension on the chip. In 2017, the current 'node' is 7 nm. In this node, the 'FinFET fin width' is 4.4 nm. [0]

It's fucking amazing to think that such microscopic features are repeatably produced at all, let alone at the scale of modern semiconductor fabrication. They can deposit layers of material measured in atoms. As in, 'Now I want a layer of copper 4 atoms thick' [1].

This table shows the actual measurements of the features of a device for each node:

http://semiengineering.com/wp-content/uploads/2014/05/Screen...

[0] https://upload.wikimedia.org/wikipedia/commons/b/bb/Doublega...

[1] https://www.youtube.com/watch?v=4G8wXQGEBrA


Correct me if I'm wrong: no one is selling any 7nm chips right now, and no one is even selling 10nm chips right now, and no one will before the end of 2017.

Also worth noting, Intel's 14nm is significantly smaller than Samsung's 14nm: https://forums.anandtech.com/threads/how-do-global-foundries...

So I don't think the answer is really that simple and straight-forward.

EDIT: I was wrong, Samsung is manufacturing commercially available SoCs on its "10nm" process. But in my defense, it's comparable to Intel's 14nm process.


Apple is using TSMC "10nm"


One wonders how long 3nm chips will operate. Electromigration becomes a more and more serious problem as features get smaller and you can't afford to lose many atoms from a feature. This is worse for devices that are electrically active doing something; an idle flash drive doesn't suffer from this.

Will it be possible to fab a CPU at this density, or just memory devices? With memory, you can use error correction to compensate for faults. That's hard to do in logic. (Not impossible; it's been done, with CPUs that have self-checking. IBM has done that in their mainframe CPUs.)


At some point, the feature size stopped being a measurement of any specific feature on the IC and became just a generalized metric.

But yeah, I mean if you forced them to, their engineers could probably produce the formula that mixes together a bunch of actual physical feature sizes and explain why 3nm is not a lie - but it's very much a marketing thing.

One obvious hint at this is how different manufacturers' 'x' nm nodes have obviously different performance.


I'm reminded of the days of CD-ROM speeds, when at one point 2X, 3X, 4X, etc. described an actual multiple of the baseline 150 KB/sec performance. Eventually, the number started measuring only the peak theoretical speed of reads from the outer edge of the disc. It ceased to be a meaningful comparative measurement and pretty much became a version number.


That was for a completely different reason. CDs used Constant Linear Velocity (CLV). This means the disc would spin slower when the head was on outer tracks and faster on inner tracks, so that the same amount of data would pass under the head every second. This was a-OK until 8x or 16x and became really insanely pointless around 24x, with the disc being vigorously accelerated and braked as the head moved from region to region. At some point some manufacturer rightly decided “to hell with this idiocy” and made a CAV unit - Constant Angular Velocity - the disc would spin at always the same rate, and if this means outer tracks read faster than inner ones, well, who cares. The whole industry soon followed - there was no point in staying CLV.
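Roughly, with typical CD geometry (program area from ~25mm to ~58mm radius, 1x = 150 KB/s); a minimal sketch, not exact drive behaviour:

    # Rough CD numbers: program area spans ~25mm to ~58mm radius, 1x = 150 KB/s
    inner_r, outer_r = 25.0, 58.0   # mm
    base_rate_kb_s = 150.0          # data rate at 1x

    # CLV: RPM varies so the data rate is constant everywhere, but the disc must
    # spin ~2.3x faster when reading near the inner edge than near the outer edge.
    print("CLV RPM ratio (inner/outer):", round(outer_r / inner_r, 2))

    # CAV: RPM is fixed, so the data rate scales with radius. A drive rated "24x"
    # (peak, outer edge) only manages about 10x on the innermost tracks.
    peak_x = 24
    print("CAV speed at inner tracks:", round(peak_x * inner_r / outer_r, 1), "x")
    print("CAV peak rate:", peak_x * base_rate_kb_s, "KB/s")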


The fun ones were the 54x drives that would explode the discs!


I still have a 52x drive in an old PC. It will always take about 5 seconds to spin to max (sounding like a jet takeoff), even if a single sector needs to be read from it.


It's a bit more complicated than that.

http://blog.cdrom2go.com/2011/02/can-a-cd-rom-disc-explode-i...

You can do it, but it takes some effort. CDs don't typically suffer structural failure at normal operation speeds...


A neighbor of mine actually had a disc explode in his drive. I couldn't believe it at first, but sure enough, the drive sounded like a maraca when I shook it.


My Diablo II disc shattered in my mother's PC when I was in the 7th grade; we were both rather upset (me because I lost my game disc in the era when they were still used for anti-piracy, her because she needed to buy a new CD drive).


What happened to the drive mechanism? I've had a (previously damaged) CD explode and I just had to pour the pieces out.


Oh, it has happened but it's very much an anomaly.


Oh god. Their versioning system uses a unit of measurement that is decrementing towards zero. Those poor souls!


At this point a measure based on some number of average gates per square millimeter (won't that be fun to get everyone to agree to) would be better advertising, and more truthful.

Working with an areal density also makes more sense in an era where improvements are getting closer to linear.


Aren't transistor densities vastly different for DRAM cells and logic? So you must choose what kind of stuff it is before "gates per mm^2" makes sense, no?


Different types of gates use different amounts of silicon area, and combining gates 'cancels out' different numbers of transistors, so the gate density you end up with also depends on your specific application.

I think the number of transistors per unit area would be better. Though advanced applications mess with transistor sizes too, the situation would be better than with gates.


Why not just use more millimeters? If you don't invest $BNs in making smaller dies you can just sell bigger chips, and also charge more for a BIGGER and BADDER chip every year. Then, when it gets out of hand, use a smaller process Intel has already abandoned.


Price is proportional to die size. Processing one wafer is a fixed price, so the more chips you can fit on one wafer, the cheaper each individual chip.

Additionally, the number of defects is proportional to the area. The bigger your chip, the more chips you will have to throw away because of defects. E.g., say you have one defect per wafer on average: if your chip takes up the whole wafer, you will have no good chips. If you can fit 100 chips on one wafer, you will have 99 good ones and one bad one.
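The parent's point can be turned into a back-of-the-envelope calculation. A minimal sketch using the common first-order Poisson yield model, with made-up defect density and wafer cost, and ignoring edge loss:

    import math

    def yield_and_cost(die_area_mm2, defect_density_per_mm2, wafer_cost, wafer_diameter_mm=300):
        # Poisson yield model: probability a die has zero defects
        die_yield = math.exp(-defect_density_per_mm2 * die_area_mm2)
        # Crude dies-per-wafer estimate: usable wafer area / die area (ignores edge loss)
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        good_dies = (wafer_area / die_area_mm2) * die_yield
        return die_yield, good_dies, wafer_cost / good_dies

    # Hypothetical numbers: a 100 mm^2 die vs a 400 mm^2 die at the same defect density
    for area in (100, 400):
        y, good, cost = yield_and_cost(area, defect_density_per_mm2=0.002, wafer_cost=10_000)
        print(f"{area} mm^2: yield {y:.0%}, ~{good:.0f} good dies, ${cost:.0f} per good die")

With these made-up inputs, quadrupling the die area cuts yield from ~82% to ~45%, so the cost per good die goes up by much more than 4x.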


Yes, I'm aware. But I'm sure the "processing cost" also factors in the R&D it took to get to that node size in the first place.

You also don't need to double the die size, just double the size of the package (which is what AMD seems to have done); that way you can swap broken dies out.


Correct me if I'm wrong, but I don't think AMD physically swaps out defective cores. I believe they're disabled individually in some kind of firmware. That's effective because the interconnect region is significantly smaller than the core area (and possibly made more reliable through feature size manipulation?). I think this has been standard practice almost dating back to multi-core introduction, where they sell high end multi-core chips as low end with some cores disabled.


Note parent was talking about die and not cores.

AMD are shipping multi-chip modules with ThreadRipper (with two die) and EPYC (with four), and then because they are separate die you can trivially swap them out.


I believe this is a way for them to maximise yield. Say their threshold is that at least 3 dies must be good and only 1 can be merely passable; then, when they test a part and only 2 are good, they can swap out one of the passable dies. They can also rate the CPUs differently, or just swap out dies when all of them fail the test, e.g. with Threadripper, replace them with another 2 and then rate those.


Heat.


Cooling technologies have also moved forward. A stock water cooler block instead of a fan could solve that.


They can just switch to picometers whenever it gets really dumb.


The closest distance between silicon atoms in a crystal is about 235 pm; there's not much room in picometers for processor design.


How do you even pass electrons through a line 6 atoms wide... :S



They already had to do it once, from micrometers to nanometers.


They must feel like Urbit users.


Maybe they can adopt an exponential versioning system. Just decrease by a power of ten every revision.


Here is my understanding:

    Today we are at ~14-16 nm.

    When we get to 7nm, today's chip that uses a 1 cm^2 silicon die can probably be built on a 0.25 cm^2 die. (IO pads are another factor.)

    If "everything (mainly yield?) being equal", they should be able to build 4x the number of chips from the same silicon wafer. Again, assuming IO is not an issue.

    If the process cost and yield are similar, the new chips "can be" 4x cheaper, OR they can pack 4x the number of transistors into the same 1 cm^2 area. That can mean more CPU cores, GPU cores, and much larger L1, L2, L3 caches for the same chip size.

    When we get to 3 nm, they can build 16x the number of chips from the same 12 inch wafer, or pack 16x the number of transistors into the same silicon area.

A good example is:

   * Apple A10:  https://en.wikipedia.org/wiki/Apple_A10

   16 nm: die area of 125 mm2, 3.3 billion transistors


   * Apple A11: https://en.wikipedia.org/wiki/Apple_A11

   10 nm: 4.3 billion transistors on a die of 87.66 mm2

Small die = more chips per wafer. More transistors = More CPU cores, GPU cores, etc.

So nm definitely has a REAL impact on the cost of a chip and the number of features (transistors) one can pack into a silicon die. It is not simply a marketing term.
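A rough sanity check of those numbers using the Wikipedia figures quoted above (real chips mix logic, SRAM, and analog, so this is only indicative):

    # Transistor density from the Wikipedia figures quoted above
    a10_density = 3.3e9 / 125.0    # TSMC "16nm": ~26 M transistors / mm^2
    a11_density = 4.3e9 / 87.66    # TSMC "10nm": ~49 M transistors / mm^2
    print(a11_density / a10_density)   # ~1.86x actual density gain

    # Ideal area scaling if the node names were literal linear dimensions
    print((16 / 10) ** 2)              # 2.56x -- the real gain falls short of this

So the density gain is real, but noticeably less than what the node names would imply if taken literally.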


Code-formatted portion:

Today we are at ~14-16 nm.

When we get to 7nm, today's chip that uses a 1 cm^2 silicon die can probably be built on a 0.25 cm^2 die. (IO pads are another factor.)

If "everything (mainly yield?) being equal", they should be able to build 4x the number of chips from the same silicon wafer. Again, assuming IO is not an issue.

If the process cost and yield are similar, the new chips "can be" 4x cheaper, OR they can pack 4x the number of transistors into the same 1 cm^2 area. That can mean more CPU cores, GPU cores, and much larger L1, L2, L3 caches for the same chip size.

When we get to 3 nm, they can build 16x the number of chips from the same 12 inch wafer.


Thank you! Before you did that I had resigned myself to missing that comment, it was unreadable on mobile.


"When we get to 3 nm, they can build 16x amount of chips from the same 12 inch wafer."

That's only if they are not up against a pad-limited die size. Long ago, people were running into pad-limited die sizes, where the I/O ring set the die size while the core logic ended up using less than all the available area. People were trying to figure out what extra stuff to throw into the core since it was shrinking so fast and the I/O was not. That was usually more memory, but that wasn't always useful.

So what's happening on that front these days? Are the current architectures actually able to make use of many more cores and memory without blowing up the I/O count of the chip?


But the OP asked if 3nm to 10nm is an apples to apples comparison, or if they are instead measuring something different for marketing reasons. In other words: will this "3nm" tech pack 11 times as many transistors as the 10nm tech for the same area?


You get to a point where the packaging costs as much as the die; then it doesn't matter how small it gets, you're just recouping the R&D for a smaller manufacturing process.


Feature size was always a measure of precision. It's just that one used to be able to draw artifacts 1 feature wide, while doing that nowadays seems to be useless.

Just as a comparison, the Bohr radius of a doping electron in a silicon crystal is around 10nm. I don't think you will see 3nm-wide transistors unless they are FinFETs.


I can't understand your answer to the question. Is 3nm a measure of distance or not? If so, what is it a measure of?


Imagine graph paper with the length of the side of a square being the feature size: 3nm. Where you shade in represents the metallization. Now imagine you have a design rule that says a metallization trace must be no less than 3 squares wide, for the sake of functionality.

That's a 3nm process. You might get away with putting two 9nm lines within 3nm of each other, or you might come up with some interesting transistor shapes that would not be possible on a larger process. But a trace would still have to be 9nm.


The problem is there are different parameters and it's possible to manipulate the numbers. As far as I understand, what really matters is transistor density, and that can obviously differ between processes with the same node number. Intel of course claims they are better than others at the same number (see, for example, https://www.extremetech.com/computing/246902-intel-claims-th... )


3nm is hugely smaller than anything else I've heard of. I know Intel is stuck at 14nm, and Samsung is at 10nm for their ARM chips (? someone correct me on that) -- could someone educate me on what 3nm chip technology means? Would it be 3x the speed density/possibility compared to 10nm chips?


Nodes are hard to compare, and pretty much everybody agrees that what Intel called 14nm is roughly equivalent to other foundries' 10nm. (Maybe some of the 10nm processes are 10 or 15% more dense than Intel's 14, but nothing like the 2x we should have with more straightforward comparisons)

And that's not even the end of the story. Intel 14nm++ is expected to be slightly LESS dense than their previous 14nm and 14nm+, to alleviate some of the problems that start to appear with such small nodes.

Let's not even talk about EUV processes, which will be needed to go under 7nm (IIRC). We are not even sure they can be used for mass production. Probably they can, but there are still a lot of things to fix in this area. 10 years ago it was expected to be in mass production today, or even 1 or 2 years ago - and it is still very far from ready. For sure it will be crazy expensive too, so tons of chips will continue to be produced on processes with bigger nodes.

So talking about 3nm now is bound to not be extremely precise, given all the unknowns. It's dubious it will come as soon as 2022. It will be crazy expensive, but we already knew that.


(this is my ignorant understanding of how CPU sizing works)

Process size isn't the end-all be-all stat to follow for CPU manufacturing. While Intel may not be leading here, they are leading when it comes to feature sizes.

https://www.extremetech.com/computing/246902-intel-claims-th...

The idea being how large the features of the CPU that make up the building blocks of the system are. While Intel's process number may be larger, they end up with smaller overall chips because they can still fit more "features" into a smaller area.

Also, Most CPUs aren't made at a single process size. They will mix 2-3 generations of process sizes when producing CPUs, and only put the hot-path parts of the CPU in the newest process size (to help improve yields).


> Also, Most CPUs aren't made at a single process size. They will mix 2-3 generations of process sizes when producing CPUs, and only put the hot-path parts of the CPU in the newest process size (to help improve yields).

No, you can't mix different processes on one wafer.

I think you mean that not everything on a CPU is at the minimum viable size of that process.


Thanks. Would you say this article describes the whole situation well?

http://wccftech.com/intel-losing-process-lead-analysis-7nm-2...


This is an announcement for construction of a "5 or 3nm fab as early as 2022." So, five years out, minimum, and the node size is a moving target.

The title is a bit clickbaity.


I found some side concerns interesting:

- an earlier node was delayed due to environmental permitting

- this project will require a lot of land

- the Taiwanese government is committed to keeping this fab at home, and willing to work with TSMC on the environmental issues.


Yeah, I think it'll be important from the perspective of the... tenuous relationship between PRC and ROC. In that sense, there is almost no environmental stress that wouldn't be worth overcoming.


Would it even work, I think, is the more meaningful question. Last I heard we were having trouble with electrons tunneling across the gates as we got smaller.


You have to start wondering if they will use quantum behaviour like Anderson localisation, which sets up a standing wave that effectively stops electrons from tunnelling in certain places (design it to bias against and/or stop gate-leak tunnelling). Svitlana Mayboroda discovered a landscape function that allows you to predict (and hence design for) this kind of behaviour. As for how actively using these kinds of quantum behaviour affects feature size / speeds / feeds / yields, that will probably eat several tens of millions of dollars / several years as well.


That's just one of many problems…


I think 7nm exists somewhere, but I could be wrong.


It exists as a buzzword. Nothing about the TSMC 7nm node means actual 7nm gate widths. It will still only be about as good as what Intel is calling 14nm++.

The simple reality is that all these fab companies have some recent fab tech, that will be roughly the same across all of them, with the exception of Intel usually being slightly ahead of the rest. The names they give them are pointless now, though.

Since the nm number doesn't really matter to consumers unless they're purchasing individual hardware components themselves, just getting the newest parts is all you can really care about. You look at the performance, power draw, and cost and make a judgement from there, and things like "Intel 14nm vs TSMC 7nm vs GF 10nm FinFET or Samsung's whatever" are meaningless marketing drivel.


7nm chips (TSMC manufacturing for Qualcomm) are in the pipe for 2018 chip launch (so maybe 2019 in phones).

http://www.androidauthority.com/qualcomm-drops-samsung-to-wo...


As for end-product realization compared to current-gen stuff, it'll be about the scale of difference between the Xbox 360 and the Xbox One.


Don't get stuck up on the numbers, they are all marketing numbers.


One wonders how "real" this announcement is and how much of it is positioning. The press release from TSMC isn't very informative [1].

As interesting as it is to consider that someone might actually be putting money on the table today, given the pains people seem to be having with the 7nm node I would not expect to see even a 5nm node until 2022 - 2023.

That said, if they do get to a 3nm node, assuming that actual circuit elements are 3 - 9nm, that is still a lot of billion-transistor chips on a wafer. I'm guessing 30% of the wafer would be consumed by die pads rather than actual chip :-)

[1] http://www.tsmc.com/uploadfile/pr/newspdf/THWQGOHITH/NEWS_FI...
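For scale, a quick estimate with the usual dies-per-wafer approximation (hypothetical die areas; ignores pad-limited designs and yield):

    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        # Common approximation: gross wafer area over die area, minus an edge-loss term
        d = wafer_diameter_mm
        return math.pi * (d / 2) ** 2 / die_area_mm2 - math.pi * d / math.sqrt(2 * die_area_mm2)

    # Hypothetical die sizes for a hypothetical 3nm-class SoC
    for area in (50, 100, 150):
        print(area, "mm^2 ->", int(dies_per_wafer(area)), "dies per 300mm wafer")

Even at 100-150 mm^2 per die, that's several hundred candidate dies per 300mm wafer before yield and pad overhead are taken into account.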


GlobalFoundries was estimating $14-18b would be needed for the next generation of chip fabs: https://venturebeat.com/2017/10/01/globalfoundries-next-gene... Their CEO notes that the 3nm or 5nm numbers being tossed around aren't really too meaningful, but the budgets speak for themselves.


Side note, for anyone interested in how chips are produced, this is one of my favorite videos:

Indistinguishable From Magic: Manufacturing Modern Computer Chips

https://www.youtube.com/watch?v=NGFhc8R_uO4


The atomic radius of a silicon atom is 0.11nm, which gives a diameter of about 0.22nm. So even tightly packed atoms would make the barrier only ~13 atoms thick. The van der Waals radius is about twice that, giving a 7-atom-wide barrier. Quantum tunneling [0] is already apparent at 3nm and gets worse from there, so I don't see how they would prevent electrons from leaking through the barrier, unless "3nm fab" comes with a shovel of marketing salt to boot.

[0] https://en.m.wikipedia.org/wiki/Quantum_tunnelling
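Spelling out that arithmetic with the radii quoted above:

    # Atom counts across a 3nm barrier, using the parent's numbers
    covalent_diameter_nm = 2 * 0.11   # ~0.22 nm per silicon atom, tightly packed
    vdw_diameter_nm = 2 * 0.21        # van der Waals radius is roughly twice the atomic radius
    barrier_nm = 3.0

    print(barrier_nm / covalent_diameter_nm)   # ~13.6 atoms at covalent spacing
    print(barrier_nm / vdw_diameter_nm)        # ~7.1 atoms at van der Waals spacing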


None of the technology node names have corresponded to actual physical gate length since probably the 45nm node.


Transistors have always leaked. The main thing is Ion/Ioff > 1. The higher the ratio the better but with digital one can get by with surprisingly little gain and still get useful circuits.


You can grow a layer of silicon on top of something with a slightly more compact lattice.


The eetimes article is just a short summary of the actual article at https://www.bloomberg.com/news/articles/2017-10-06/tsmc-read...


I thought this article was more interesting, and also has interesting comments:

https://www.eetimes.com/document.asp?doc_id=1330971


3nm isn't even on the international semiconductor roadmap (which stops at 5nm, with a 4nm half-node). How can they build a fab for a process node that isn't even designed yet? Has anyone even produced prototype chips at this node yet?


The International Semiconductor Roadmap was properly tossed out the window at the 28nm node. In the past year they've more or less retconned the industry's current node system _into_ the roadmap.

Modern pitch measurement is more a marketing term than a _real_ measurement of engineering precision. A smaller/newer value roughly equals 1/2 the power consumption; it no longer implies 2x the density.


So am I correct in understanding that Moore's Law marches on? If we see 5nm in 2020 and TSMC is seeing 3nm in production in 2022, this is on track, correct?


PSA: 3nm doesn't actually mean ANYTHING except that it's smaller than the 5nm (and larger) nodes from the SAME FOUNDRY. If you want a general benchmark now, TSMC "7nm" ~= Intel "10nm". Note that this isn't because Intel is pious and searching for the true node name or anything; their nodes used to be less dense than the industry standard (back above ~45nm) but just turned out denser now.


Why don't these numbers ever list the width AND height AND length... I'd really like to know how many atoms each transistor is composed of.


And who is making the lithography equipment for TSMC for its "3nm" Fab? I guess ASML?


This is, in Intel's terms, 5nm.

We know Intel's real 10nm / TSMC's 7nm is finished and is now a matter of yield.

We know Intel's 7nm / TSMC's 5nm is pretty close to complete. This is coming to market in roughly 2020.

We know 3nm is coming in 2022 / 2023.

But what comes after 3nm?

Will we need some materials science breakthrough? A process and material that can run at 10GHz with the same power usage?

More transistors haven't given us more performance, whether spent on IPC, core count, clock speed, special instruction sets (it's funny how we swing from RISC back to CISC again), or larger caches. It seems we have reached a plateau where we can't get much more performance from CPU hardware. GPUs are different, since they scale very nicely with transistor count and are more limited by bandwidth.

And fast, simple, high-performance, easy-to-program-for programming languages + frameworks haven't really come along.

But the cost of fabs, wafers, and design keeps rising.

Or have we reached a stage where performance no longer matters for the majority of people?


We're merely hitting 1.2GHz mean use and 4GHz peak usage. 10GHz is very fast.


3nm. Wow. Synchrotron, or laser tin vaporization soft X-ray source?



