
What Transistors Will Look Like at 5nm - Lind5
http://semiengineering.com/going-to-gate-all-around-fets/
======
ksec
>At 5nm, it will cost $500 million or more to design a “reasonably complex
SoC.” In comparison, it will cost $271 million to design a 7nm SoC, which is
about 9 times the cost for a 28nm planar device, according to Gartner.

So basically, with every node improvement, the "design" cost increases by
around 50-100%. This heavily favours Apple's model: you will have to sell a
lot more SoCs to balance out that $500M design cost for a leading node. Apart
from the design cost, does anyone know if the wafer cost increases as well?
And at what rate does it increase per node?

Thinking about this at Intel's scale: the PC industry is shrinking in units,
and the CPU die size is also getting smaller to maintain Intel's profit
margins. At what point does the cost increase sway the balance in other
foundries' favour, once Intel's unit sales take longer to recoup the cost
than TSMC's?
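
A rough back-of-the-envelope sketch of that break-even question (a minimal
illustration; the per-unit margin below is an assumed placeholder, not a
figure from the article):

    # Units needed to recoup a node's one-time design (NRE) cost.
    # Design costs are the Gartner figures quoted above; 28nm is derived
    # from "7nm is about 9x the cost of a 28nm planar device".
    design_cost = {"28nm": 271e6 / 9, "7nm": 271e6, "5nm": 500e6}
    margin_per_unit = 40.0  # assumed gross margin per SoC, in dollars

    for node, nre in design_cost.items():
        units = nre / margin_per_unit
        print(f"{node}: ~{units / 1e6:.1f}M units to recoup ${nre / 1e6:.0f}M NRE")

Even under that generous margin assumption, a 5nm design needs on the order
of ten million units just to cover the NRE, which is exactly why the model
favours players with Apple-scale volumes.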

~~~
Andys
I agree (re: favoring Apple's model). Whether server or mobile, there'll be
fewer SKUs, and those that exist need to be blockbuster hits.

It may well be that economics, rather than physical limits, is the thing that
ends Intel's long run.

~~~
digi_owl
Same as with "peak oil" then. Its not that we run out of physically
extractable oil, but that the cost of doing so will curtail our current usage
of the substance.

Its pretty much the inverse of the Jevon paradox (or maybe that everything has
a bell curve, and we are so focused on the left half we forget its mirror on
the right).

~~~
hga
Actually, people are looking forward to a long period where a certain node,
"20-something" nm (not Intel's 20-something), becomes so standard that
economies of scale make it the one most entities use for their designs,
especially designs that aren't mobile or otherwise seriously
power-constrained.

I gather we're already sort of there, with wafer costs in the $5,000 range;
the trick, of course, is to limit the NRE to something you can afford. The
lowRISC project is going this route; I learned much of this starting from
their future production plans.

------
brian-armstrong
The lattice constant for silicon is about half a nanometer, right? So that
means this new "gate" is only about 10 silicon atoms across?

That's a very impressive feat. It's hard to imagine CMOS continuing to scale
down beyond that point.
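
A quick sanity check of that arithmetic (silicon's lattice constant is about
0.543 nm):

    # How many silicon unit cells fit across a 5 nm feature?
    lattice_constant_nm = 0.543  # cubic lattice constant of silicon
    feature_nm = 5.0

    cells = feature_nm / lattice_constant_nm
    print(f"~{cells:.1f} unit cells across {feature_nm} nm")  # ~9.2

Roughly nine unit cells, i.e. on the order of ten atoms across, matching the
estimate above.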

~~~
amelius
It makes me wonder why we really need such a huge lattice (the silicon bulk).
Can't we somehow deposit small pieces of lattice only in the places where a
transistor is needed?

~~~
6nf
There are alternatives, but they are difficult (expensive) to work with and
not yet required, so we keep using silicon for the 5nm node.

~~~
hyperpallium
Until peak silicon.

 _EDIT_ : the analogy with peak oil is not literal. The point is that just as
uneconomical sources of oil (and of energy) became more economical when the
price went up, the parent's "alternatives" will become more economical when
present silicon technology gets more expensive (due to the lattice size, for
example, as discussed). Just as with oil, it doesn't necessarily mean we'll
give up silicon. Though we might.

~~~
jrockway
About 28% of the Earth's crust by mass is silicon, so I'm not too worried.

------
dogber1
I do not understand the proclaimed 3x increase in design cost per finfet
node, particularly in the context of digital ICs. Most cell designs are
highly repetitive, so the increased design complexity should only add a
one-time offset to the total chip cost. Hence, the total design cost should
be comparable to previous nodes, i.e. well below 2x. I understand that purely
analog designs are a different beast, but it doesn't make any sense to use
advanced nodes for those in the first place (ft/fmax drops as a result of
higher-than-linearly scaled CGS).
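
The one-time-offset argument can be made concrete with a toy per-chip cost
model (all numbers here are illustrative assumptions, except the $500M 5nm
design cost quoted from Gartner):

    # Per-chip cost = recurring unit cost + one-time design cost / volume.
    def per_chip_cost(unit_cost, nre, volume):
        return unit_cost + nre / volume

    nre = 500e6       # one-time 5nm design cost (Gartner figure above)
    unit_cost = 20.0  # assumed recurring cost per die (wafer, test, package)

    for volume in (1e6, 10e6, 100e6):
        cost = per_chip_cost(unit_cost, nre, volume)
        print(f"{volume / 1e6:>5.0f}M units -> ${cost:,.2f} per chip")

At high volume the one-time offset nearly vanishes per chip; at low volume it
dominates, which is why the NRE jump matters so much for anything that isn't
a blockbuster SKU.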

~~~
bravo22
The "design" cost quoted mostly means mask cost. It gets very expensive to
produce for smaller nodes.

~~~
adwn
> _The "design" cost quoted mostly means mask cost._

No, mask costs are only a very small part in development and production of
complex ASICs. Some sources:

1) First graph in [http://electroiq.com/insights-from-leading-edge/2014/02/gate...](http://electroiq.com/insights-from-leading-edge/2014/02/gatech-interposer-conf-amkor-globalfoundries/)

2) First graph in
[http://www.eetimes.com/author.asp?doc_id=1322021](http://www.eetimes.com/author.asp?doc_id=1322021)

~~~
bravo22
That's the recurring cost of the mask set, because masks don't last forever.
The setup cost of the masks, however, is very high.

As for logic design effort, verilog is verilog. Floor-planning and P&R are a
bit more complex at lower nodes, but that's mostly software. Also, TSMC
offers you standard cells to pop into your design.

Therefore most of that "design" cost is in the foundry making the first set
of masks for you, not the incremental mask cost. They're recouping their
investment in the process, thus the price is extremely high for newer nodes.

~~~
adwn
Do you have a source or some links? I would like to know more about this.

~~~
bravo22
It is mostly based on my experience working as an ASIC designer and having
taped out several chips.

However, you can consider it logically -- the engineering effort to design
the logic and perform place and route doesn't change much from node to node.
You're doing the same work, with the same software, albeit with new libraries
provided by your fab.

The cost clearly correlates with smaller nodes because they charge you a ton
to do the "setup" for you, i.e. make the first set of masks. Older nodes are
now much cheaper than they were because more shops are using them, thus
spreading the amortization costs.

AFAIK, a lot of Cortex-M ARM chips (such as the STM32F) are made on 90nm
nodes. There, the variation you offer your customer is important, so they
want lower mask costs to make as many variants as possible. The core itself
is so small that going to 28nm wouldn't offer much savings, because the bulk
of the cost is in packaging and testing, and, at 28nm, would be in the
amortized mask cost.

------
n00b101
_“My current assumption is that 5nm will happen, but it won’t hit high-volume
manufacturing until after 2020,” said Bob Johnson, an analyst at Gartner. “If
I were to guess, I’d say 2021 to 2022.” ... In R&D, chipmakers are also
looking at 3nm and beyond, although it’s unclear if these nodes will ever
happen._

What this all seems to be pointing to is that, in all likelihood, the "party"
will be over within the next 3 to 6 years. It seems to me like there is an
insufficient level of panic happening in the world over this issue (the
imminent end of Moore's Law after 30+ years of non-stop exponential growth).
Moore's Law was a massive, earth-shattering, 30+ year continuous exponential
curve that we have all been riding. I think the collapse of this curve could
have serious implications for more than just chip designers and manufacturers.
In my view, some groups that seem over-exposed and under-concerned are the
software and startup industries.

For the past 30+ years, the market has seen better/smaller/bigger/faster
silicon computing devices arriving with unrelenting regularity. Now that will
suddenly stop with one final, ultimate silicon product generation. After
which, it sounds like we don't get faster or cheaper CPUs, or bigger CPU
caches, or more CPU cores, or more RAM, or better or cheaper GPUs, or bigger
or cheaper SSDs, or better/cheaper LED lights or OLED screens, or
better/smaller/faster/cheaper smart phones/tablets/digital
cameras/sensors/broadband networking, etc? So the whole show just stops, like
a high-speed train hitting a wall. And, by all credible accounts, this
cataclysmic event is scheduled to occur within the next 3-6 years (barring
some unforeseen, miraculous scientific revolution)!?

Once that happens, all computer technology seems destined for a fate similar
to many other commoditized, industrial-age technologies. A famous example is
the design and performance of commercial aircraft, which has not
fundamentally changed in nearly a century. Another example is steel
manufacturing: nothing about the cost or other performance dimensions of
steel has changed since the industrial revolution. Steel is now a base
commodity, and no one is waiting to upgrade all of their steel structures to
some next-generation steel alloy that would justify the massive expense of a
complete "hardware upgrade." Other examples include electro-chemical
batteries, internal combustion engines, rockets, etc.

If this truly is an end-game within the next 4-6 years, then there will be a
few niche, interesting areas to eke out performance gains within (let's say)
the next 20 to 30 years (e.g. through specialized hardware designs and new
software optimizations), but these gains seem to be very different from the
economic miracle of the endless PC/server/smartphone/software upgrade cycle
that everyone has become so accustomed to. No such "eke out optimizations"
will be able to compensate for the massive, negative economic impact of
Moore's Law ending, considering that our global economy seems to be
fundamentally dependent on technology consumption cycles stemming from Moore's
Law.

This situation reminds me of the 2008-2009 global financial crisis, where US
sub-prime residential real estate prices had started falling in 2006 but
everyone kept on ignoring the problem and hoping that it would just go away.
It also reminds me of global warming, where the consequences are dire, and
there is no solution in sight, but everyone is hoping that some new
fundamental scientific discovery will revolutionize the energy industry. It
_could_ happen ... we could possibly discover a fundamentally new energy
technology that replaces conventional fossil fuels, electro-chemical
batteries, etc. And we _could_ possibly discover a fundamentally new science
that replaces "Complementary Metal–Oxide–Semiconductor (CMOS)" silicon
technology. On the other hand, perhaps the laws of physics do not admit such a
possibility? Or worse, even if it is physically possible, what if our global
socio-economic system should collapse long before we are able to fund the
necessary discovery? For example, is it possible that Moore's Law stalling
within the next 5-10 years could trigger a financial crisis so massive that it
economically ends Moore's Law for good, before the research can be funded to
solve the problem?

~~~
adwn
Your proposal for immediate panic is an unnecessary overreaction.

> _it sounds like we don't get faster or cheaper CPUs_

The progress in desktop CPUs has been a near-plateau for several years now.

> _bigger CPU caches_

Diminishing returns were hit several years ago.

> _more CPU cores_

Ditto for most use cases.

> _more RAM_

Ditto for most use cases.

> _better or cheaper GPUs_

GPUs have become so fast and cheap that integrated GPUs are good enough for
most use cases.

> _better/cheaper LED lights or OLED screens_

I don't think they are related to the semiconductor nodes of digital logic?

> _So the whole show just stops, like a high-speed train hitting a wall._

No, the train already started slowing down several kilometers back, and most
people on it have realized it. There will be no wall-hitting.

> _Steel is now a base commodity_

Steel may be a commodity, but the steel-using industry (in this analogy, the
software industry) is alive and well.

> _no one is waiting to upgrade all of their steel structures to some next
> generation steel alloy that will justify the massive expense of a complete
> "hardware upgrade."_

This is actually a good thing. Besides, the sentiment "no need to optimize,
just wait a year and computers will be fast enough" died over ten years ago
and was stupid to begin with.

> _our global economy seems to be fundamentally dependent on technology
> consumption cycles stemming from Moore's Law_

It's really, really not. At least not outside of the SV-startup-scene
cognitive bubble.

~~~
n00b101
If I can summarize what you are saying: we have been experiencing early
symptoms of the end of Moore's Law for the past 5 to 10 years, and nothing
drastic has happened yet, so there's nothing to worry about?

Consider the possibility that such a narrative simply describes a
"slow-motion train wreck" rather than the absence of a "train wreck".

There is no doubt that the slow-down has been occurring for several years. One
turning point was the collapse of Dennard Scaling around 2005, which led to
stalling CPU clock speeds and the "multi-core crisis." I know that CPU clock
speed plateaued over a decade ago. I have not tracked costs, cache sizes,
and other factors as closely, but comments here seem to be confirming that
many factors have been stalled for the past 5 to 10 years (somewhere between
the failure of Dennard Scaling and 28nm nodes). There have been other slow-
motion slowdowns happening as well, including the collapse of PC sales growth
and now the emerging collapse of smartphone sales growth. The fact that this
has been happening for "a while" does not make me feel any more at ease.

To be frank, I am less confident in your comments regarding the dependence
(or lack thereof) of global economic growth on technology consumption. I beg
to differ that it is just an "SV-startup-scene cognitive bubble" ... I'm
pretty sure technology is systemically important to global economic growth.

~~~
skybrian
The thing is, we're not actually all in the same industry. Companies that sell
phones might not be so happy if sales level off or decline, but others will do
fine. On the software side, there are plenty of ways to innovate without the
hardware getting better.

If people stop upgrading phones, all the companies writing software to run on
phones can still do their thing. Uber and AirBnB aren't really going to care;
their apps already work. New businesses can come along based on mobile phone
apps that put today's hardware to good use, even if the specs don't get any
better.

Or looking at entertainment, the game industry doesn't actually depend on
better hardware. If the VR thing turns out to be a fad, game designers will
still manage to come up with innovative new video games for existing
platforms, and people will keep playing them. (Minecraft doesn't require the
latest hardware.)

Also, the design of phones can improve even if the raw hardware specs for CPU
or memory aren't any better. You can see hardware innovations in other areas
(like new sensors). Most recently, the fingerprint sensor is a good
improvement.

There are all sorts of opportunities for innovation and refinement that have
nothing to do with Moore's law.

~~~
jacquesm
When phones were wired, you didn't 'upgrade' your phone for years, if not
decades.

All the end of Moore's Law indicates is the end of the free lunch: free
performance increases for software without any work on that software.

So instead of 'doom and gloom' I predict a _very_ healthy resurgence in the
reduction of bloat and in efficiency, something that, as far as I'm
concerned, can't happen fast enough.

FWIW my cell phone is 6 years old, and as long as it works I'll happily use
it. If a phone is a status symbol or a fashion accessory, then there are
plenty of ways to get people to upgrade even if the underlying tech doesn't
change, so I'm quite sure that manufacturers will find a way to re-package
the same tech in ways that allow them to sell into the elective replacement
market.

------
Razengan
This may be slightly unrelated, but I'm curious: have _curved_ silicon dies
ever been considered? Say, like hollow rings or spheres? Would there be any
benefits regarding heat or speed, etc.?

