
Will 7nm And 5nm Really Happen? - matt42
http://semiengineering.com/will-7nm-and-5nm-really-happen/
======
Filligree
Nothing in there about how they'll do 1.5nm. That's understandable; it's a
hard problem, and I imagine any solutions they've come up with will remain
trade secrets for some time.

I've heard stories, though. Scanning-tunneling microscopes on fire off the
shoulder of Orion. Massively parallel arrays of atomically precise probes,
used to build CPUs atom by atom. Things you wouldn't believe.

I'm sure it's coming one day, but once you've gotten that far, how far away
are you from outright nanofactories? Which are science fiction, and therefore
can't happen.

~~~
vorbote
Love the reference, but I'm sure all the young philistines who hang out on HN
have missed it completely. :-( You have my upvote.

Time to die.

~~~
yzzxy
This is hardly an obscure reference... I think you underestimate the pop
culture literacy of young programmers.

~~~
noir_lord
Shhh, allow them their "programmers these days" elitism.

My 25 year old non-techie gf would have gotten that reference.

Blade Runner is hardly an underground cult classic.

~~~
xcntktn
Blade Runner is different because of the attention it received due to the
"Final Cut" version that came out in 2007. If it hadn't been for that,
references to it would probably be a lot less recognizable with the younger
crowd. Case in point: I recently made a "2001: A Space Odyssey" reference in
front of a room full of 20-something programmers and not one person picked up
on it. It wasn't even that obscure -- it was a picture of the monolith, one of
the most recognizable images from the movie (and in science fiction in
general). The people in the room generally just hadn't seen or even heard of
(really all that is required to get the reference) the movie.

~~~
nullc
Good luck finding a copy of it with the original film noir-esque voiceover.

~~~
sp332
It's expensive, but not difficult to find. [http://www.amazon.com/Runner-Five-Disc-Complete-Collectors-B...](http://www.amazon.com/Runner-Five-Disc-Complete-Collectors-Blu-ray/dp/B000UBMWG4) (The version without the voiceover is "the Director's Cut.")

------
baldfat
We are quickly coming to the end of an era in which hardware can compensate
for sloppy code and processes. We are heading back to the days of the 640K
barrier or the old C-64. It was amazing how efficient programs were and how
ingenious the software engineers were.

We might even get surprised by the code of something like Elite. An incredibly
brilliant piece of software that did so much more than was thought
possible.

~~~
jerf
"We are heading back to the days of the 640K barrier or the old C-64. It was
amazing how efficient programs were and how ingenious the software engineers
were."

I think software is already getting there, since we've been dealing with
diminishing returns for a while. The previous generation of languages like
Python, Perl, Ruby, etc. traded performance for convenience and waited for
hardware to catch up. It was a good trade in many cases at the time, but as we
collectively learned more about the design space I think it has become clear
that it is not an intrinsic trade; it is possible to get a more convenient
language than we had in the 80s or 90s without trading away performance.

LuaJIT was an early example of this, but the floodgates are opening now. Go
is not architected to the nth degree for speed, but it's a language that's
only slightly slower than C, and in my experience, only slightly slower to
develop in than Python (possibly with a crossover point, as program size
grows, where Go is simply faster). Rust ought to be a lot easier to work with
than C++ once you learn it, and I expect it to reach near-optimal speeds,
possibly even beating C/C++ in practice, as it will be more feasible to
perform some aggressive optimizations for multithreading. And I've noticed
that a lot of the other little bubbling languages that may become languages of
the future, like Nimrod, have the same sort of focus: "how can we get these
convenient features _without_ a 20-100x speed penalty?"

Then, once you have these languages, I'm noticing that in many cases while
bindings exist, entire stacks are being rewritten to be simpler and faster. Go
has its own webserver. Dollars-to-donuts Rust will too in another couple of
years. I suspect this is actually part of the trend; rather than binding to
older, huge frameworks and code written without much concern for latency
issues, etc, more code is going to start being rewritten to care more about
those things. Between mobile pressing us on one end and desktop speed
advancement stalling out on the other, there's increasing motivation to write
faster code where it matters, without necessarily having to use C or C++.

I'm not saying paradise is incoming, but I get the impression that we're
seeing more care about performance manifesting in real languages and code than
we used to.

~~~
mreiland
Pretty much every competitor I've ever seen for C/C++ has always made the same
claim: with X optimization that C/C++ can't do, we can eventually meet or beat
C/C++.

It's never actually come true in the general case. _Ever._

~~~
jerf
You're thinking of the previous generation that I was referring to. "We can
create languages that do whatever we want, but Sufficiently Smart Compilers
will make it as fast as C!" This has become a joke. Justifiably.

The new generation that I'm referring to is more like "Well, _that_ didn't
work. Let's design nicer languages, but think about performance impacts up
front this time." Go is pretty fast _now_, for instance... not C-fast, but
not Python-slow or Javascript-slow; it's closer to C than those, even on a log
scale. LuaJIT is an early example, where I believe the design of Lua was
fundamentally based on what could be done quickly, rather than what could be
done nicely. You can still have nicer languages that incorporate our years of
progress since the 80s/90s, but if you think about performance from day 1,
they can also run pretty quickly, too. They may not be _quite_ as nice as the
scripting languages, but then, we also know ways of making up for that too, so
in the balance I like them better even so.

(And A: Yes, Javascript is _still_ slow, even after all the browser work. It's
just "not as slow as it used to be"; consider, if Javascript was so fast, how
does asm.js post such improvements over it? Answer: JS is still slow. And
asm.js is still about 2x slower than C, last I knew, after all. B: I no longer
believe "languages aren't slow, only implementations are", as, proof-by-
construction, the last 10-15 years produced plenty of languages that appear to
be, yes, fundamentally slow. Those who wish to argue may produce your choice
of general-C-speed compiler or interpreter for Javascript, Python, Perl, or
Ruby. After I-can't-even-guess how much effort has been put into speeding
these up, I say I'm allowed to draw conclusions.)

~~~
mreiland
No, I'm not. Those new-fangled compilers for that fancy-ass "think about
performance up front" language don't have the umpteen years and billions of
dollars' worth of research put into them the way C++ has.

~~~
derefr
If you want all the advantages of C++ compilers, you can write a backend for
the language's compiler that generates C++. If the language is actually
performance-focused, there shouldn't be any semantic mismatch in doing so; for
example, every Rust type should be able to be translated 1:1 to a (much more
syntax-laden) C++ type.

The new-generation languages (of which Go is a bad example; Rust and Nimrod,
and now Swift, are much better ones) don't attempt to get the computer to do
the kind of magic at runtime (garbage collection, type reflection) that made
previous-gen languages so slow.

Instead, these NGLs start with the assumption of C/C++ runtime semantics, and
then, through extra work done _at compile time_ (e.g. type inference,
ownership-tracking) clear away as much implied/redundant/unneeded syntax as
possible.

~~~
mreiland
> If you want all the advantages of C++ compilers, you can write a backend for
> the language's compiler that generates C++.

Well, it sounds as if it's as simple as being able to take advantage of
optimizations to meet or beat C++.

good luck with that.

Also, FYI: Nimrod is garbage collected, which runs counter to your
reasoning.

~~~
didroe
It's much simpler than that. It actually makes writing a compiler easier, as
you have higher-level abstractions to work with compared to generating machine
code. There are many languages that compile to C, which should prove to you
that it can be done. I don't see why targeting C++ would be any different and
I'm sure there are languages already out there that do.

~~~
Someone
You can compile any language down to C or C++, but that doesn't mean that it
will run as fast as well-optimized C/C++.

The moment you include dynamic dispatch, auto-conversion to big integers, or
one of a zillion of other high-level features without also introducing ways
for the programmer to indicate how the compiler should implement them, you
give up being just as fast as C/C++.

Yes, the difference may be small, and spending time on improving your compiler
can make it smaller and smaller, but for any 'real world program', it won't be
zero.

The same is true for C vs assembly, but there, the difference mostly _is_
small because C has none of those high-level features.

Also, few people have the skills and the time to write well-optimized C/C++.

~~~
mreiland
bam, nailed that shit, especially the point about performance costs inherent
in a feature set rather than in a language.

I just didn't want to continue a conversation with someone who thinks
adjusting the age old "optimizations will eventually make it as fast" to be
"compiling to C/C++ as an optimization will eventually make it as fast" was
somehow going to magically make it come true.

It's the same old argument reskinned, and as you pointed out, the feature set
has a lot to do with it.

~~~
derefr
I said nothing about "compiling to C/C++ as an optimization." I said
_targeting C++'s semantics_ (i.e. having only features C++ already has) makes
a language fast.

~~~
mreiland
[https://news.ycombinator.com/item?id=7921511](https://news.ycombinator.com/item?id=7921511)
> If you want all the advantages of C++ compilers, you can write a backend for
> the language's compiler that generates C++.

------
zw123456
The drive to smaller gate structures is because a MOSFET transistor gate is
essentially a capacitor, and as such, it takes time to charge and discharge
the capacitance of the gate. So to further increase the performance, one must
reduce the capacitance of the gate. The gate is basically an RC circuit. The
easiest way to reduce the capacitance is to reduce the physical size of the
gate, since gate capacitance is directly proportional to gate area. Once
you get down to a few atoms wide, which they are approaching now, then quantum
effects kick in and it starts getting impractical to go further. The
alternatives for the next generation have to do with attacking the R part of
the equation which is what they mean when they talk about electron mobility.
Other approaches that are unrelated to the traditional MOSFET are being
researched, but there is such a huge amount of IP based on MOSFET any move
away from that will incur huge cost in porting over. (think of it as the
hardware version of switching to a new programming language). I am not sure
what the next gen answer will be as it seemed the author did not either (which
tells me he knows what he is talking about).
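The RC argument above can be sketched as a quick back-of-envelope in Python. The resistance and capacitance values here are made-up illustrative numbers, not real process data:

```python
# Back-of-envelope: a MOSFET gate behaves like an RC circuit, so the time
# to charge/discharge it is roughly the RC time constant.
def gate_delay(r_ohms, c_farads):
    """RC time constant in seconds."""
    return r_ohms * c_farads

# Hypothetical gate: ~1 kOhm effective resistance, ~1 fF gate capacitance.
tau = gate_delay(1e3, 1e-15)        # about a picosecond

# Gate capacitance scales with gate area, so shrinking the gate shrinks C,
# and the delay with it (at the same effective R):
tau_half = gate_delay(1e3, 0.5e-15)
print(tau, tau_half)
```

This is why attacking R (electron mobility) becomes the lever once C can no longer shrink.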

~~~
rdrdss23
I've heard that the whole quantum tunneling stuff is in practical terms
bullshit. While it's a real effect (and physicists looove to talk about it),
it's virtually irrelevant b/c the overwhelming issue is parasitic
capacitance. When you have a 3 GHz clock, _everything_ acts like a capacitor
and you have current leaking all over the place (the smaller the circuit the
closer all the elements are).

So transistors in modern processors don't end up switching consistently. There
is all sorts of error correction to compensate for that, but it only goes so
far.

~~~
sigterm
The gate leakage current was a serious concern until the High-K gate
dielectric was found. It increased the gate dielectric thickness so tunneling
current is reduced significantly (exponentially). Nowadays the main problems
are subthreshold leakage and dynamic power. Here's a good article about it:

[http://spectrum.ieee.org/semiconductors/design/the-highk-sol...](http://spectrum.ieee.org/semiconductors/design/the-highk-solution)

~~~
rphlx
> the main problems are subthreshold leakage and dynamic power.

This is exactly correct. Even at 28nm, many SoCs are intentionally using less
than the max manufacturable transistor density, due to dynamic power/thermal
constraints.

SRAM is scaling poorly too. It makes up 50-60% of many SoC designs yet at
1500MHz+ its density (Mb/mm^2) looks likely to increase just 1.1X between 28nm
and 16nm.

Desktop CPUs are not going to get 32MB on-die caches any time soon. At least
not without eDRAM.
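To put that 1.1x figure in context, here's a rough comparison against ideal geometric scaling. Node names are marketing labels rather than literal dimensions, so treat this only as an upper bound:

```python
# If density scaled purely geometrically with the node name, going from
# "28nm" to "16nm" would multiply bits per mm^2 by the square of the
# linear shrink.
ideal_density_gain = (28 / 16) ** 2   # about 3.06x
quoted_sram_gain = 1.1                # the ~1.1x figure cited above for fast SRAM
print(round(ideal_density_gain, 2), quoted_sram_gain)
```

The gap between ~3x ideal and ~1.1x actual is what "SRAM is scaling poorly" means in practice.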

------
AshleysBrain
I find it just amazing that modern chips already have parts that are just
thirty or so atoms across [1], and that there are plans to go even smaller. I
guess you can't go much smaller than 1.5nm because that's, like, three atoms,
right?

[1]
[http://en.wikipedia.org/wiki/14_nanometer](http://en.wikipedia.org/wiki/14_nanometer)

~~~
tfgg
1.5nm is more like 10 atoms across, given that, for example, a C-C bond is
0.154 nm. Not that that helps tremendously :)
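The arithmetic, using the C-C bond length quoted above as a rough stand-in for atomic spacing:

```python
cc_bond_nm = 0.154              # C-C bond length, in nanometers
atoms_across = 1.5 / cc_bond_nm # how many bond lengths fit in 1.5 nm
print(round(atoms_across, 1))   # about 10, not 3
```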

~~~
Denvercoder9
I don't think we can actually make chips out of carbon...

~~~
tfgg
It was a number for the length of a chemical bond that I had to hand; other
bonds aren't significantly different, and definitely not 5 Angstroms.

Also, people working on graphene and related materials are trying damn hard to
make chips out of carbon ;)

~~~
josaka
Unit cell is 5.43 Angstroms, according to this site: [http://hyperphysics.phy-astr.gsu.edu/hbase/solids/sili2.html](http://hyperphysics.phy-astr.gsu.edu/hbase/solids/sili2.html). Of course the bonds by which the cell is constructed are smaller.

------
jmpe
How I learned to stop worrying and love the miniaturization:

[https://www.youtube.com/watch?v=bKGhvKyjgLY](https://www.youtube.com/watch?v=bKGhvKyjgLY)

Seriously, watch it. Memristor-tech relies on this kind of miniaturization and
can provide a speed boost in several areas in current architectures.

Secondly, having worked in semiconductors: there's a lot of conservative force
holding back development. We could have had current tech with far fewer
worries than we have now if the industry didn't respond so allergically to
everything that looks a little exotic in the CMOS process, like high-k
dielectrics.

~~~
kken
HP and the Memristor is a good PR story. Unfortunately there is not too much
beef behind it.

The Memristor is actually the same as an RRAM element (Resistive Random Access
Memory). Companies other than HP have started working on it long before and
are significantly ahead. For example Micron recently presented a multi-gigabit
prototype chip. But there is still a lot to be done. HP lacks the funds,
manpower and manufacturing muscle to really get anywhere in this area.

~~~
mikeyouse
> HP lacks the funds, manpower and manufacturing muscle to really get anywhere
> in this area.

They might lack the political will, but HP has ~$15B cash on its balance
sheet, 350k employees, and tens of billions in fixed assets.

~~~
nl
They have money, but not the semiconductor expertise, assets or experience of
an Intel. See this explanation for why that matters:
[https://news.ycombinator.com/item?id=7922277](https://news.ycombinator.com/item?id=7922277)

------
phkahler
It sounds like 7nm and 5nm are entirely feasible, but nobody is prepared to
spend $10 billion on the fab until they're sure they've got (close to) the
best solution.

Also, I feel like the whole industry is getting a bit lazy. The big foundries
are all around 28nm and getting 100 percent utilization of their capacity. So
long as their customers' competitors have no better foundry to go to, there is
little incentive to press forward. There is the possibility to fall behind,
but waiting just means increasing certainty in the path forward. Meanwhile
Intel...

~~~
ebiester
Or, alternatively, the simple solutions have all been squeezed out and these
are genuinely hard problems.

------
rwmj
Question, probably going to come across as dumb, but I'll ask it anyway ...

Is there a market for "huge" feature chips (e.g. 100nm+)? Would it be worth
making these older processes very cheap and widely available and growing the
market that way?

(You can still build an early Athlon at 130 nm or a 486 at 800 nm).

~~~
goteborg
The cost of making chips at 100nm+, 90nm, 45nm, etc. is almost the same. It
actually gets cheaper with smaller sizes because you can fit more chips on the
same number of wafers.

The problem described by the article is that 10nm is already so small that we
are hitting the barrier of what's physically possible.

~~~
rwmj
What (I think) I mean by the question is this:

Why can't I have a $1M machine sitting in the corner of my lab which turns out
800nm chips on demand? Can the technology which in 1990 was incredibly
demanding now be turned into a product, thus vastly opening up the chip
market, albeit for "obsolete" varieties of circuits?

~~~
CapitalistCartr
Making chips requires a large, expensive fab plant. Using one to make
something as obsolete as that would be more expensive than just making chips a
couple of generations old, perhaps 30-50 nanometers. But no matter what it
makes, such a plant is still expensive.

~~~
rwmj
Look at this ~6000nm Intel plant from the late 1970s. It was extremely high
tech at the time, but nothing there seems inherently like it cannot be fully
automated, simplified and shrunk down to my $1M fab machine:

[http://www.youtube.com/watch?v=ll_-_ngu4Gg](http://www.youtube.com/watch?v=ll_-_ngu4Gg)

(Starts about 7 mins in)

~~~
mechanical_fish
(I love this question. I have a Ph.D. in the making of semiconductor devices,
and I once worked as a troubleshooter in a factory that was making transistors
with a twenty-year-old process.)

The first fallacy that's tripping you up is marginal cost. Just because it's
cheaper to buy an 800nm-process chip today than it was in the 1990s doesn't
mean that it's cheaper to build the factory, employ the packaging engineers,
or source the materials (let alone stuff all those things into a refrigerator-
sized box). The finished parts are cheaper because the R&D, factories,
processes, and HR procedures were bought and paid for in the 1990s, and those
things are all still there, so long as a market is there. The workers are
_very_ happy to keep doing their jobs, and the marginal cost to keep them
working is relatively low, particularly because the yield on a mature process
can be really high.

The second fallacy is the physical-plant fallacy. You look at the factory and
the machines and you think that's what it takes to make semiconductors. But if
I gave you the keys to a shiny new Intel factory today, you would not succeed
in making 80486 processors in a few weeks. Even if I gave you a new factory
_and_ its staff _and_ the services of the world's leading experts in
semiconductor devices _and_ went back in time to arrange the delivery of a
steady stream of raw materials, you would _still_ not succeed in making
working 80486 processors in a few weeks, although the Dream Team might manage
to make some things that _looked_ like working devices right up until you
tried to turn them on... or until you tried to turn them on three weeks later.

The expensive part of manufacturing is the _learning curve_. Every one of
those shiny machines has five hundred knobs, and every one of those knobs
needs to be set correctly or the products won't work. Your experts can guess
the approximate settings for everything, but the crucial final 5% needs to be
dialed in by trial and error. You must _exercise_ the factory, then correct
for the mistakes.

That's expensive because the _feedback_ is expensive. The difference between a
broken part and a working part might take weeks to manifest, and it's
literally microscopic, so you need an entire little team of highly trained QA
scientists with thermal-cycling ovens and electron microscopes and Raman
spectroscopes and modeling software and coffee in order to develop
_hypotheses_ about the problems with your process, hypotheses which must be
tested by running more doomed wafers through that process.

(I've watched a few thousand people come within a hair of losing their jobs
because we couldn't make this iteration converge fast enough.)

This is where economy of scale comes from: Practice. The Nth wafer coming out
of a fab has high yield _if and only if_ the (N-1)st wafer had high yield, so
you have to bootstrap your yield up from zero one batch at a time. Your fab is
only as valuable as the number of wafers it has made, or tried to make. The
factory needs practice, and practice takes time, and time costs money.
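That bootstrapping dynamic can be caricatured with a toy learning-curve model in Python. The ceiling and rate constants are purely made-up for illustration, not fab data:

```python
import math

# Toy model: yield climbs toward a ceiling as the fab accumulates wafers,
# one corrected mistake at a time. Constants are illustrative only.
def yield_after(wafers, ceiling=0.95, learning_rate=0.002):
    return ceiling * (1 - math.exp(-learning_rate * wafers))

for n in (0, 500, 2000, 10000):
    print(n, round(yield_after(n), 3))
```

The point of the shape: early batches are nearly all scrap, and the only way to the high-yield regime is through them.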

---

So, here's how your refrigerator-sized fab is going to work. You'll take
delivery and set it up. Unfortunately, shipping being what it is, parts will
have slipped or gotten bent or stretched. Your humidity and temperature cycles
will be different than they were back in Shenzhen. Your ambient dust level
will be different. The batch of photoresist that you pour into your hopper
will have been manufactured on a different week than the batch that the
manufacturer used to calibrate the machine, and your sputtering targets will
contain a different mixture of contaminants.

All of these things can probably be calibrated out – _if_ the knobs are well-
built enough to stay where you set them, _and_ your environmental controls are
comprehensive enough that the conditions remain constant, _and_ you aren't
forced to change suppliers, _and_ you have the operational discipline to
resist the urge to get blind drunk and start twiddling settings at random
while sobbing. But _how do you know which experiment to run_, on your
microscopically-flawed parts, in order to converge on working parts? You need
to order the optional "electron microscope" kit, which ships in a slightly
smaller box. The box next to that one will contain the materials scientist
that you ordered. Hopefully they remembered to drill the air holes!

~~~
rwmj
That was a great answer, thanks.

~~~
mud_dauber
Agreed. Best answer I've seen on HN in ages. (25 years in semis here.)

------
tedsanders
This question faces a tremendous amount of uncertainty. So much rides on
whether the power/$ of EUV sources can be significantly boosted. No one knows
when (or even whether) this will happen.

It's not even known whether we will reach 10nm any time soon, as the article
asserts we will.

The industry's history has inculcated a huge amount of technological optimism,
but someday that optimism will be misplaced. There is a fairly high chance
that day is today.

~~~
joosters
A fairly high chance? Care to quantify that? Is it 50%? 10%? 1%?

~~~
tedsanders
Sure, I'd be happy try to quantify my views. What question would you like a
quantifiable answer to?

------
jostmey
Let's keep the size of the transistor in perspective. The human synapse is on
the order of 20 to 40nm in diameter. I have to wonder if we will be able to do
that much better than what Nature has already come up with.

~~~
w-ll
Nature really has produced some amazing things, but they are generally not
optimal, just good enough.

~~~
jostmey
True. And Natural Selection never discovered how to extract energy by breaking
apart the atom---that took an act of intelligence. Still, I suspect there may
exist limits to how small a fundamental computational unit can be:

[http://www.sciencedaily.com/releases/2002/11/021126203508.ht...](http://www.sciencedaily.com/releases/2002/11/021126203508.htm)

~~~
ars
[https://en.wikipedia.org/wiki/Natural_nuclear_reactor](https://en.wikipedia.org/wiki/Natural_nuclear_reactor)

~~~
nullc
Oh yea?! so where does there exist a natural fusion reactor?!? Man: 1 Nature:
0

(... :P I think the point was that there are, as far as we know, no evolved
organisms using fission. Not that there wasn't fission in nature prior to
man.)

~~~
cowardlydragon
I thought there were bacteria that used nuclear radiation to sustain
themselves. They aren't doing fission, though...

[http://www.sciencedaily.com/releases/2006/10/061019192814.ht...](http://www.sciencedaily.com/releases/2006/10/061019192814.htm)

~~~
wiml
Those are really just feeding on the reactive chemical species produced by the
ionizing radiation from nuclear decays in the rock. Pretty nifty, and I hadn't
heard of them before.

The other possible answer to this that I know of are the fungi who might be
able to derive metabolic energy from gamma rays--- though last I checked it
wasn't entirely certain that's what they're doing.

------
izzydata
If you doubled the number of transistors in the same space, wouldn't that
nearly double (or even quadruple) the heat output? How would someone even keep
these things at a reasonable temperature at 7nm?

~~~
coryfklein
No, because the heat generated by the transistors is a function of their size.
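The standard way to see this is the dynamic power relation P ≈ C·V²·f, sketched below with hypothetical numbers. Under classic (Dennard) scaling, shrinking a transistor also shrank its C and V, so doubling the transistor count kept power density roughly flat; voltage scaling has largely stopped, which is why heat is now the binding constraint:

```python
# Dynamic switching power per transistor: P = C * V^2 * f.
def dynamic_power(c_farads, v_volts, f_hertz):
    return c_farads * v_volts ** 2 * f_hertz

# Hypothetical values: 1 fF gate at 1.0 V switching at 3 GHz.
per_transistor = dynamic_power(1e-15, 1.0, 3e9)

# After a shrink with voltage scaling stalled: C halves, V stays at 1.0 V.
shrunk = dynamic_power(0.5e-15, 1.0, 3e9)
print(per_transistor, shrunk)
# Power per transistor only halves, so twice the transistors in the same
# area gives roughly the same total power in a smaller space.
```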

------
agumonkey
I can't wait for Haswell-like processors to be so small they can be powered by
kinetic side-effects (walking, etc.).

------
ufmace
Here's an article that disproves Betteridge's law of headlines.

------
Tloewald
Tl;dr. Yes.

