
Ask HN: What comes after we hit the 16nm barrier? - smanek
For the last fifty years, we have lived in the world of Moore's Law - with truly astounding results.

However, within the next decade we will be approaching the fundamental physical limits of the transistor (e.g., <http://en.wikipedia.org/wiki/16_nanometer>).

What do you think comes next? Will hardware plateau while software catches up? How will this affect the computing future we've been promised (atom-level simulations of complex systems, the singularity, etc.)?
======
mnemonicsloth
Moore's Law isn't going to die. It's going to be extended (or supplanted) by
further developments, just as Einstein's gravitation extended Newton's.

There are many, _many_ successor technologies to CMOS out there: nanotubes,
nanowires, magnetic switching, molecular switching, photonic switching. I'm
sure more have been proposed in the five years since I stopped paying
attention.

The situation for semiconductor researchers and manufacturers is a lot like
the one Edison faced with the light bulb. We know what we want, and there's a
huge field of options open in front of us. Now we just have to spend some time
on basic research to figure out what scales and what doesn't. This might take
some time, but a delay will only make the next doubling more lucrative when it
becomes feasible.

And if you're comfortable with (only slightly) more far-out ideas, the writing
is going up on the wall right now:

<http://metamodern.com/2009/05/22/a-third-revolution-in-dna-nanotechnology/>

It so happens that this development has been anticipated here:

<http://e-drexler.com/d/07/00/1204TechnologyRoadmap.html>

------
Confusion
I think two obvious ways forward are

* Increasing the die size (feasible by reducing substrate costs)

* Stacking transistors (3D CPUs)

~~~
albertcardona
Hopefully they'll (re)invent blood vessels along the way, i.e. an effective
heat-removal system like the one our brains have.

------
torr
We've been living with Moore's law for so long that young people seem to think
it's an actual law, like gravity.

Fact is, computing has benefitted from some core technologies (e.g.
lithography) that have lent themselves to order-of-magnitude increases in
things like the number of transistors per square cm. It's been a happy
coincidence, but, inevitably, that ship has about finished its trip and is
heading back to the dock.

Here's what will happen (and is happening) when hardware hits that limit:
pretty much nothing. People will still use computers for email, browsing the
web, watching videos, numerically solving partial differential equations, etc.
There will be no clamour to make faster computers, because computers are pretty
much fast enough for what we want them to do. If they're not fast enough, then
tasks can be spread out and handled in a distributed manner.

Oh, also, computer hardware companies will find that they can't get you to
upgrade every few years to get the latest and greatest. This is great news,
by the way, for, say, schools, since they'll be able to invest in computers and
expect to keep them long enough to be worth the investment.

------
kristianp
According to Wikipedia, 16nm will be succeeded by the 11-nanometer node.
<http://en.wikipedia.org/wiki/11_nanometer>

------
fauigerzigerk
Massive parallelism. Of course, we need parallel algorithms as well, and we'll
have to deal with all the concurrency issues. I think there are one or two
paradigm shifts in the cards as we transition towards this massive parallelism.

~~~
verdverm
Have you heard of Amdahl's law (<http://en.wikipedia.org/wiki/Amdahl%27s_law>)
before? The push towards parallelism and multicore is a ploy for processor
sales. Most applications can't benefit from parallelism on more than 4-8 cores.
There are applications, such as computer graphics and servers, that do benefit,
but your average user only uses one core right now. The problem processors are
facing is that clock speed can't be increased: the heat can't be dissipated
fast enough, and they would melt if run faster.
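
To make that concrete, here's a minimal Python sketch of the law; the 90%
parallel fraction and the core counts are illustrative numbers, not
measurements:

    def amdahl_speedup(p, n):
        """Overall speedup when a fraction p of the work is spread over n cores."""
        return 1.0 / ((1.0 - p) + p / n)

    # Even a 90%-parallel workload tops out quickly past a handful of cores:
    for n in (2, 4, 8, 64, 1024):
        print(n, round(amdahl_speedup(0.9, n), 2))
    # 2 -> 1.82, 4 -> 3.08, 8 -> 4.71, 64 -> 8.77, 1024 -> 9.91 (limit: 10x)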

~~~
randallsquared
_Most applications can't benefit from a parallelism on more than 4-8 cores._

But since the vast majority of people run more than one or two applications at
a time, that doesn't matter that much. I'll go out on a limb and predict
that the day is coming when no one ever closes an application. Actually, my
parents are already there on their Mac Mini -- they only close windows, never
applications, until a software update requires a reboot.

------
kenver
It's already difficult for hardware designers to keep up with the underlying
technology: managing the complexity of a design is incredibly hard, hence the
massive growth in EDA tools over the last 10 years.

The productivity gap between the underlying hardware and the capabilities of
the designers is probably a more immediate threat than reaching the
fundamental physical limits.

------
Hexstream
Personally, I don't care that much. Current processors are plenty fast as it
is. I mean, it's always nice to have faster ones, and I don't think there will
ever be a point where everyone is satisfied and we stop improving them, but
I'm just not CRAVING more speed all that much.

However, what I'd LOVE to have is ubiquitous availability of very fast
persistent storage. It's really a pain to deal with the relative slowness of
hard disks.

Something like memristors would be a solution.
<http://en.wikipedia.org/wiki/Memristor>

------
psyklic
I think this question has already been answered -- Moore's Law has been out of
play (for processor speed) for some time now. Microprocessor companies are now
banking on massive parallelism.

One of the most promising new technologies (imo) relies on probabilistic
transistors -- behavior that emerges as transistors get smaller and current
leakage increases. A perfect example is video decompression, which does not
have to be exact. You can get a drastic power reduction by doing it
probabilistically (for some applications), which is great for mobile devices.
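
As a hedged software analogy (not a real decoder, and not a model of any
specific hardware): suppose the low-order bits of each 8-bit pixel value can
flip with some probability, as power-starved probabilistic logic might allow.
The worst-case error stays small and bounded, which is why lossy tasks like
video can tolerate it:

    import random

    def noisy_pixel(value, flip_prob=0.05, noisy_bits=2):
        """Flip each of the lowest noisy_bits bits with probability flip_prob."""
        for bit in range(noisy_bits):
            if random.random() < flip_prob:
                value ^= 1 << bit
        return value

    pixels = [random.randrange(256) for _ in range(10000)]
    decoded = [noisy_pixel(p) for p in pixels]
    worst = max(abs(a - b) for a, b in zip(pixels, decoded))
    print("worst-case pixel error:", worst, "/ 255")  # at most 3 for 2 noisy bits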

~~~
icey
Moore's law doesn't have to do with processor speed. It has to do with the
number of transistors that can be placed on a chip.

From <http://en.wikipedia.org/wiki/Moore%27s_law>

_Moore's law describes a long-term trend in the history of computing hardware.
Since the invention of the integrated circuit in 1958, the number of
transistors that can be placed inexpensively on an integrated circuit has
increased exponentially, doubling approximately every two years._

Speed gains have been a side effect of this.
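
As a rough sketch of what that doubling implies (the starting count and time
horizon here are illustrative, not historical data):

    def projected_transistors(start_count, years, doubling_period=2.0):
        """Project a transistor count under a fixed doubling period (years)."""
        return start_count * 2 ** (years / doubling_period)

    # A hypothetical 1-billion-transistor chip, projected 10 years out:
    print(projected_transistors(1e9, 10))  # ~3.2e10, i.e. 32x in a decade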

~~~
psyklic
Yes, I was referring to the fact that Moore's law is still applicable in terms
of computer memory, but not microprocessors ... In any case, I listened to
talks four years ago that convincingly argued that it was no longer
applicable, so it seems that we've already seen what happens once Moore's Law
is no longer accurate.

------
sown
<http://en.wikipedia.org/wiki/11_nanometer>

It goes to 11.

My personal belief is that non-semiconductor technologies will come around, at
least for specialized tasks. For day-to-day operations, I imagine I'll be using
technologies similar to what we have now for some time longer.

------
tynman
We software developers will have to actually start paying attention to code
bloat and performance. We won't be able to count on beefier hardware to run
our next round of sloppy hacks!

------
physcab
Optical computing would be pretty cool.

~~~
frogking
Wouldn't pure functional programming fit _that_ specific bill perfectly?

