
5nm vs. 3nm - Lind5
https://semiengineering.com/5nm-vs-3nm/
======
kragen
I didn't realize Samsung was offering a 5nm process to customers; this means
that TSMC is _not_ the only 5nm foundry out there, as I thought it would be
when GlobalFoundries canceled their 5nm R&D program. And it sounds like
competition might keep process shrink alive for at least one more node, down
to 3nm, although there's a good chance that either Samsung or TSMC will have
to bow out at that point.

Since Dennard scaling ended about 15 years ago, these new devices will
probably run hotter, adding to the dark-silicon, eh, let's call it a
situation. It's a _problem_ from the traditional point of view where you
expect to be able to use all your hardware all the time, but maybe it's an
_opportunity_ if you see it as a chance to handle burstier computational loads
or to pack a greater diversity of specialized cores onto a chip. But of course
that increases both design costs and the complexity of programming the device
once it's been fabbed.

The impending collapse of Moore's Law has thus been delayed for two or three
years, or softened anyway, but the appetite for computation due to deep
learning continues unabated. Since scaling Jean Hoerni's planar process down is
becoming increasingly uneconomic, this would be a good time for a non-planar
process to emerge — a trillion squares occupies a one-million by one-million
area, while a trillion voxels is only ten thousand by ten thousand by ten
thousand, a scale a hundred times larger and therefore less demanding on your
fabrication processes. You'll probably need some plumbing in there for
coolant. I don't know of anybody working on this, surprisingly.
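The square-versus-voxel arithmetic above can be checked in a couple of lines (a sketch, using nothing beyond the counts in the comment):

```python
# Feature-count geometry: the same 10^12 features need a much shorter
# side length in 3D than in 2D, so each feature can be ~100x larger.
features = 10**12
side_2d = features ** (1 / 2)   # features per edge of a square layout
side_3d = features ** (1 / 3)   # features per edge of a cubic layout
print(f"2D: {side_2d:,.0f} per side, 3D: {side_3d:,.0f} per side")
print(f"relative feature size: {side_2d / side_3d:.0f}x larger in 3D")
```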

~~~
Traster
I think one of the particularly interesting aspects of machine learning is
that the research into it has been focused on parallelism and partitioning
from day 1. The result is that those workloads are particularly well suited to
simply scaling out to more chips. It may be that we can continue our
computational climb for a while longer just by scaling out, in a way that
consumer-driven products really weren't able to do.

~~~
segfaultbuserr
> _It may be that we can continue our computational climb for a while longer_

I have a random question. Imagine this hypothetical scenario: semiconductor
manufacturing comes to a complete halt, and new physics and technology are
still decades away. Could we further improve the practical performance of
general computation through innovation alone?

I imagine that, instead of riding the train of Moore's Law, more resources
would be invested in design optimization and R&D on new architectures, e.g.
faster FPUs, faster pipelines, etc.

Also, previously discarded and ignored ideas might be implemented again and
deliver some real advances. While exponential growth would not be possible, at
least linear scaling should be achievable. For example, computers without
clocks. Based on this comment on HN,
[https://news.ycombinator.com/item?id=19554248](https://news.ycombinator.com/item?id=19554248),
one major obstacle to clockless chips is that the entire VLSI toolchain is
designed and optimized for synchronous logic, and this could change if the
industry invested some serious resources. Another example that comes to mind
is high-level programming support in hardware, e.g. Lisp machines.

Does my imagination make any sense?

~~~
gameswithgo
>can we further improve the practical performance of general computation by
innovation alone?

We can improve the performance of average programs by a factor of ~100x simply
by more careful software engineering. Don't use python or javascript, optimize
code for CPU cache hits, use more cores effectively, etc.

For games, however, maybe only 2x, if anything.
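A toy illustration of the kind of win careful engineering can buy: the same membership query, done with a data structure scan versus a hash lookup. The exact ratio depends on hardware and interpreter; this is illustrative, not a benchmark.

```python
from timeit import timeit

# A list scan is O(n) per lookup; a set lookup is O(1) amortized.
# Same answer, very different cost for large n.
items = list(range(10_000))
as_list, as_set = items, set(items)

t_list = timeit(lambda: 9_999 in as_list, number=1_000)
t_set = timeit(lambda: 9_999 in as_set, number=1_000)
print(f"list scan is roughly {t_list / t_set:.0f}x slower than set lookup")
```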

~~~
segfaultbuserr
I was referring to the computational capabilities that hardware can provide,
not the performance of an average program. The parent article talks about
semiconductor manufacturing; you can save your rants on inefficient and
useless programs for another HN article.

~~~
BubRoss
It's not a 'rant'. This is exactly why speeding up CPUs is more difficult,
even with billions of transistors. People expect to write whatever they want
and have it run faster. 1% of the transistors in a CPU are actually running
instructions, the other 99% are trying to keep it busy.

~~~
segfaultbuserr
> _gameswithgo: Don't use python or javascript_

It clearly is a rant to me! Don't get me wrong, I'm not a huge fan of
JavaScript, but discussing useless programs is simply not what I'm asking
about here. I'm not interested in how Google Chrome could be 1000% faster.

> _People expect to write whatever they want and have it run faster. 1% of the
> transistors in a CPU are actually running instructions, the other 99% are
> trying to keep it busy._

I agree. Now _your_ perspective makes the discussion of software interesting.

Recently I saw the article _C Is Not a Low-level Language_
([https://queue.acm.org/detail.cfm?id=3212479](https://queue.acm.org/detail.cfm?id=3212479)),
in which the author argues that the in-order, synchronous, sequential
execution model from the PDP-11 heyday is outdated. However, the vast majority
of programs (i.e. C programs) are still written to this model, so a CPU must
spend a lot of resources dispatching this sequential code and introducing
countless transparent optimizations (e.g. ILP) to make it run faster.
Compilers, meanwhile, are becoming monsters, because they must be as
intelligent as possible to understand the algorithms in a program and rewrite
them automatically for optimum performance on a modern CPU. As a result of
this disparity, what the hardware can actually provide is often underutilized.

The author proposes that we try discarding the PDP-11's classical "in-order,
synchronous" view of a program and develop new programming languages and
models designed with the capabilities of modern hardware in mind, such as
explicit low-level parallelism, to eliminate this disparity. Then CPUs can
focus on what they are good at, with less overhead spent dispatching
instructions.

------
guardiangod
I've been following fab news for close to 2 decades already. I am familiar
with all the major players in the market, but this article keeps mentioning
SMIC.

Now, I've known of SMIC since its creation, but AFAIK it has always been
several nodes behind the leading edge and has never scored any major
contracts. Without any large production contracts, how does it get enough
experience to even reach 7nm? UMC, their partner, has pretty much given up on
7nm already. TSMC has resisted all attempts at espionage. Is it from Samsung?

~~~
microcolonel
Well, there are lots of possible factors:

a) China is turning inward, maybe they're getting some state support,
monetary, political, or otherwise; or at least benefiting from a lucky
coincidence.

b) SMIC has been credibly accused of misappropriating TSMC secrets in the
past, and settled.

c) I think a number of major Chinese manufacturing companies invest resources
and money in them, possibly as some form of insurance policy.

d) The Chinese government does procure armaments, maybe SMIC does
manufacturing they've been asked not to talk about.

~~~
bcaa7f3a8bbc
> _China is turning inward, maybe they're getting some state support,
> monetary, political, or otherwise_

This seems to be the most likely factor.

~~~
gvb
Yes, it is an explicit strategic policy of the Chinese government as part of
"Made in China 2025" with state support (political and monetary).

[https://en.wikipedia.org/wiki/Made_in_China_2025](https://en.wikipedia.org/wiki/Made_in_China_2025)

~~~
bcaa7f3a8bbc
A more direct motivation is the recent U.S. sanctions on Chinese companies:
state support in the semiconductor sector has been boosted significantly,
since decision-makers in China now see the lack of state-of-the-art
semiconductor manufacturing capabilities as a critical threat to national
security. Plausibly, developing the domestic semiconductor market to reduce
foreign reliance is going to be a major goal.

As we can see, SMIC has already withdrawn from the New York Stock Exchange
entirely [1], and has purchased a 7nm EUV lithography machine from ASML for
$120 million [2]!

[1] [https://www.scmp.com/business/article/3011737/chinas-
biggest...](https://www.scmp.com/business/article/3011737/chinas-biggest-chip-
maker-smic-withdraw-new-york-stock-exchange-trade-spat)

[2] [https://www.anandtech.com/show/13941/smics-14-nm-mass-
produc...](https://www.anandtech.com/show/13941/smics-14-nm-mass-production-
in-1h-2019)

------
eloff
Moore's law is dead, but I don't think the story ends there. Now the
incentives exist to explore alternatives to silicon, which could yield many
more years of big process improvements. While silicon was scaling so rapidly,
there was little point in investing in alternatives. The free lunch is over,
but the stage is set for the next act of the microprocessor revolution.

~~~
msla
I wonder if architectural diversity will increase even further than it has.

It's already increased a bit in the microprocessor world with the rise of
GPUs, but that "just" puts supercomputer-style vector processing in SBCs and
laptops. It isn't as new a concept as systolic arrays, for example; those
can be fast even on slow hardware if you pick your problem right, just as
GPUs are only speed demons on certain tasks. Well, if we can't beat
problems to death with increasing scalar speed, it might give us more
incentive to design ever-more-specialized chips that solve specific problems
extremely effectively even if they're comically useless for general-purpose
computing.

~~~
Nuzzerino
Ray Kurzweil has long predicted 3D computing chips would become a paradigm.
I'm not talking about GPUs, but actually vertically stacked layers of
transistors. [https://www.kurzweilai.net/radical-new-vertically-
integrated...](https://www.kurzweilai.net/radical-new-vertically-
integrated-3d-chip-design-combines-computing-and-data-storage)

~~~
eloff
This has arguably already been appearing on the graphics side with stacked
memory, but now Intel is getting serious about this approach:
[https://www.theverge.com/2019/1/7/18173001/intel-
lakefield-f...](https://www.theverge.com/2019/1/7/18173001/intel-lakefield-
foveros-3d-chip-stacking-soc-design-ces-2019)

A big problem is heat. But stacked designs produce less heat overall, just in
a smaller area. That can actually make heat dissipation more centralized and
more efficient (due to higher thermal gradient) so I think it's a solvable
problem.

~~~
Retra
I think if you can properly do a fully 3D circuit, you will probably also free
up a lot of space to interleave heat pipes within.

~~~
undersuit
Make Peltier cooling wires at the same transistor size throughout the chip to
cool the interior and dump all the heat on the outside. Peltier coolers are
supposed to be inefficient, but when you need to cool the interior of a cube a
few millimeters across you don't have many options.

------
abdullahkhalids
Suppose I have a million dollars. Could I build a fab [1] that can manufacture
an Intel 286 processor (with a feature size of roughly 1.5 micrometers) or an
equivalent? If not, what year of semiconductor manufacturing technology could
I replicate?

Wikipedia provides this rough feature size / year table:

10 µm – 1971
6 µm – 1974
3 µm – 1977
1.5 µm – 1982
1 µm – 1985
800 nm – 1989
600 nm – 1994
350 nm – 1995

What about only 100K USD?

[1] capital costs only. Not labor.

~~~
avs733
No.

You couldn't even buy the cleanroom building for $1m

~~~
abdullahkhalids
University labs often have cleanrooms that cost roughly on the order of low
single-digit millions, and they are built to the standards of 2010s
fabrication requirements. Replicating 1980s cleanroom requirements today would
be a lot cheaper.

Edit: Page 64 of this PDF
[https://dokumente.unibw.de/pub/bscw.cgi/d9262701/01_History....](https://dokumente.unibw.de/pub/bscw.cgi/d9262701/01_History.pdf)
suggests that in 1980 the total investment cost of a fab was $100 million. I
would expect that the same tech could be replicated a lot cheaper today; a
100x improvement doesn't seem outrageous.

~~~
0xffff2
I don't see anything on that slide that indicates whether the numbers are
inflation-adjusted or not. $100 million in 1980 dollars is ~$300 million in
2019 dollars. That's going to eat into your savings quite a bit.

I'm no expert, but a 100x savings seems pretty outrageous to me, and 300x even
more so.
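A quick sketch of that arithmetic. The ~3x cumulative 1980-to-2019 CPI factor here is an assumed round number, not an official figure:

```python
# Inflation-adjust the slide's 1980 fab cost and compare to a $1M budget.
CPI_FACTOR_1980_TO_2019 = 3.0         # assumed round number

fab_cost_1980 = 100e6                 # $100M figure from the slide
fab_cost_2019 = fab_cost_1980 * CPI_FACTOR_1980_TO_2019
budget_ratio = fab_cost_2019 / 1e6    # versus a $1M budget
print(f"~${fab_cost_2019 / 1e6:.0f}M in 2019 dollars, {budget_ratio:.0f}x a $1M budget")
```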

------
baybal2
> There are fewer foundries to choose from at the most advanced nodes

In effect, only two. It is surprising just how fast it has turned into a
duopoly.

Samsung and TSMC are now the Airbus and Boeing of the semiconductor industry.

~~~
p1esk
Intel is still ahead of them, right? Last I heard, they're moving pretty
aggressively on their “7nm”, which is supposed to be at least as good as
TSMC’s “5nm”.

Compare min metal pitch for Intel’s “10nm” (36nm) and TSMC’s “7nm” (40nm) [1]

[1]
[https://en.wikichip.org/wiki/10_nm_lithography_process](https://en.wikichip.org/wiki/10_nm_lithography_process)

~~~
eight_ender
They've yet to master their own 10nm process, let alone 7nm. I don't doubt
their processes outdo the competitors' on actual die shrinkage, but they've
been having a hell of a time of it.

~~~
Goz3rr
The problem with 10nm is that they were too ambitious and tried to do it
without EUV. 7nm is being developed by a different team and does use EUV, so
it shouldn't suffer from the same delays.

------
js8
If I understand correctly, 3 nm is less than ten atoms across. Can't this make
the technology much less reliable due to quantum effects and interference?

~~~
01100011
AFAIK yes. You start to run into problems because the area you're trying to
make a transistor in might not contain one of the dopant atoms which determine
the semiconductor type. I believe hot carrier injection becomes more of a
problem as well. At some point you probably run into quantum effects as
electrons just start tunneling where they're not supposed to.
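The dopant-count worry can be made concrete with a toy Poisson estimate. The 1e18 cm^-3 channel doping used here is an assumed ballpark, not a figure from the thread:

```python
import math

# Expected dopant count in a tiny channel volume, and the Poisson
# probability that the volume contains zero dopant atoms at all.
doping_per_cm3 = 1e18                     # assumed channel doping
side_nm = 10                              # cube edge, in nanometers
volume_cm3 = (side_nm * 1e-7) ** 3        # 1 nm = 1e-7 cm
expected = doping_per_cm3 * volume_cm3    # mean dopant count
p_zero = math.exp(-expected)              # Poisson P(N = 0)
print(f"expected dopants: {expected:.1f}, P(none): {p_zero:.0%}")
```

With these numbers a (10 nm)^3 region holds about one dopant atom on average, so a sizable fraction of such regions hold none, which is why fully depleted devices (mentioned below) sidestep random dopant effects.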

On top of all that, the design rules become crazy at small dimensions. You
can't just make a sharp bend in a wire because the high frequency component of
the bend causes ringing in the interference pattern of the EUV laser.

* - I'm just a software guy with a BSEE... I'm curious what a semiconductor designer has to say about all this.

~~~
brennanpeterson
You end up with fully depleted devices to avoid dopant effects. And tunneling
loss is already an issue, and is mitigated with better work function metal
design.

Sharp bends don't happen because they cause large fields which in turn cause
dielectric breakdown. Most critical metal layers are oriented in a single
direction. 2D printing with EUV isn't really an issue.

There are three components that make up the area of a cell and are used to
infer the scaling: the fin pitch, the metal pitch, and the cell height (track
count). An older technology (22nm) might have a 9-track, 40nm-by-60nm size.
14nm would be 9T / 28 / 42; 10nm would be 7.5T / 26 / 38; 7nm would be
6.5T / 26 / 38 with SDB; and 5nm might be 5.5T / 22 / 30.

The numbers are very approximate, but the key is that design compaction (which
requires major physical integration changes) coupled with reduced key pitches
drives the shrink.
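Interpreting those figures as (track count, metal pitch in nm, gate pitch in nm), which is my assumption about the notation, the relative cell areas can be sketched as:

```python
# Rough standard-cell area model: cell height = tracks * metal pitch,
# cell width scales with the gate pitch, so relative area is
# approximately tracks * metal_pitch * gate_pitch.
nodes = {
    "14nm": (9.0, 28, 42),
    "10nm": (7.5, 26, 38),
    "7nm":  (6.5, 26, 38),
    "5nm":  (5.5, 22, 30),
}

def relative_area(tracks, metal_pitch, gate_pitch):
    return tracks * metal_pitch * gate_pitch

areas = {name: relative_area(*p) for name, p in nodes.items()}
for a, b in [("14nm", "10nm"), ("10nm", "7nm"), ("7nm", "5nm")]:
    print(f"{a} -> {b}: x{areas[b] / areas[a]:.2f} area")
```

Note how the 10nm-to-7nm step here comes almost entirely from the track-count reduction (SDB), with no pitch change, which matches the point that compaction, not just pitch shrink, drives scaling.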

The nice part is that you tend to get more design innovation, because you are
no longer competing with shrink. And the relative cost of additional masks and
layers is low.

------
kingosticks
Can someone explain why the supply voltage for traditional finFETs can't be
taken below 0.75V, as quoted in the article? Are they just talking from a
mass-production yield perspective?

~~~
Robin_Message
[https://electronics.stackexchange.com/questions/286824/silic...](https://electronics.stackexchange.com/questions/286824/silicon-
diode-threshold-voltage-0-7)

~~~
kingosticks
Thanks! But I guess my point below still stands: it works just fine, it's just
leaky.

------
Dylan16807
> IC design costs also continue to rise. The cost to design a 28nm planar
> device ranges from $10 million to $35 million, according to Gartner. In
> comparison, the cost to design a 7nm system-on-a-chip (SoC) ranges from $120
> million to $420 million, according to Gartner.

Is the comparison here a device with the same number of transistors, a device
with the same area, or something else entirely?

Because if that's the per-area cost, then the design cost per transistor has
barely budged.

------
rocqua
Why are nodes a linear step of 2nm apart?

It seems to me that going from 7nm to 5nm (a factor of 1.4) is a smaller step
than going from 5nm to 3nm (a factor of 1.67). Not just a smaller step in
engineering effort, but also in effect size.

~~~
wtallis
It's not a linear scale, it's just that we're still rounding to integers, so
we're going from 7 to 5 instead of 4.9 and then to 3 instead of 3.5. Since
we've long since departed from having node names correspond to any physical
dimension or even a meaningful composite of critical dimensions, the only
thing wrong with sticking to simple integer naming is that we might run out of
smaller integers before we run out of ways to improve silicon fabrication.

~~~
rocqua
Ah, so it's a factor of sqrt(2) on the length scale, corresponding to a factor
of 2 in area and (I guess) a factor-of-2 improvement in transistors/cm^2.
Makes sense, thanks!
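The sqrt(2) pattern is easy to check (pure arithmetic, no other assumptions):

```python
import math

# Successive "nodes" shrink linear dimensions by ~1/sqrt(2), which
# halves area; the marketing names round the ideal values to integers.
names = [7, 5, 3]                                  # marketing names
ideal = [7 / math.sqrt(2) ** i for i in range(3)]  # 7.0, ~4.95, ~3.5
for name, dim in zip(names, ideal):
    print(f'"{name}nm": ideal {dim:.2f}, area factor {(7 / dim) ** 2:.1f}x vs 7nm')
```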

------
agumonkey
How long until you can get flagship smartphone performance under a Watt ?

~~~
sharpneli
You can get a flagship from maybe 4 years ago under a watt today.

In the future you'll be able to get the current flagship under a watt too. But
then the new flagship, with its 5W consumption, will be so much better.

~~~
agumonkey
Only asking because under a watt you can get a reliable, small, solar-powered SoC.

~~~
simongr3dal
Small solar-powered, like calculators and their roughly 1 sq. in. solar
strip?

That would be amazing, but probably not feasible with the kind of display
people have come to expect on their smartphones.
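A back-of-the-envelope check of what a calculator-style strip could deliver. The full-sun irradiance and cell efficiency here are assumed round numbers:

```python
# Power from a 1 sq. in. solar strip under assumed ideal conditions.
IRRADIANCE_W_PER_M2 = 1000   # assumed full-sun irradiance
EFFICIENCY = 0.20            # assumed cell efficiency

inch_m = 0.0254                        # one inch in meters
area_m2 = inch_m * inch_m              # 1 square inch
watts = IRRADIANCE_W_PER_M2 * EFFICIENCY * area_m2
print(f"{watts:.2f} W from 1 sq in in direct sun")
```

So even in direct sun the strip yields on the order of 0.1 W, which is why a sub-watt SoC is the interesting threshold and a smartphone-class display is not.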

~~~
bhouston
Maybe they can figure out how to embed a solar panel into the display. It
would absorb light, though, so it may be hard to balance with a light-emitting
display. I wonder if one can be very specific about directionality for both
emission (outward only) and absorption (inward-coming light only). Logically,
I'd think the emission layer goes on top of the absorption layer.

This seems possible on the face of it, but I don't know enough about the
specifics to say whether it's reasonable.

I could see it, though. Just leave your phone face up on your desk during the
day to have it charge.

~~~
agumonkey
I was just wondering about that. But light emission would probably cancel out
the solar intake.

