
Moore’s law may be running out of steam, but chip costs will continue to fall - svepuri
http://www.economist.com/news/technology-quarterly/21662644-chipmaking-moores-law-may-be-running-out-steam-chip-costs-will-continue
======
Expez
Moore's law is more of an implementation detail than anything else. Nobody
really cares about transistors. What we all care about is computational
density, computational efficiency, and exponential progress. Thus the
interesting metrics are something like flops/m^3 and flops/Watt. The cool
thing about Moore's law is that just by shrinking transistors we got both
smaller and more energy-efficient chips.
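
To make those metrics concrete, here's a back-of-the-envelope sketch in Python; the chip numbers are made-up placeholders, not any real part's specs:

```python
# Illustrative only: hypothetical chip specs, not real numbers.
peak_flops = 1.0e12          # peak throughput: 1 TFLOP/s
package_volume_m3 = 2.0e-6   # package volume: ~2 cubic centimetres
power_watts = 50.0           # power draw under load

density = peak_flops / package_volume_m3   # flops per m^3
efficiency = peak_flops / power_watts      # flops per Watt

print(f"computational density:    {density:.2e} flops/m^3")
print(f"computational efficiency: {efficiency:.2e} flops/W")
```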

I think that economic forces will continue to drive progress, and probably
exponential progress like we've enjoyed in the past, for at least another
decade. Probably several.

The way this is going to happen is through a paradigm shift. To the layman
this seems weird, but to anyone in the business this should be expected. We've
already been through quite a few. The first computing devices were mechanical,
then they were based on electrical relays, then came the vacuum tubes and
finally we entered the era of the transistor. We can argue about what the next
paradigm will be, but I have no doubt there will be one. And soon.

~~~
pjc50
This is all very "something will come up" without any kind of technological
foundation. It's not a simple process of money in, changes to the laws of
physics out. This is why we've seen only small incremental improvements to
batteries over the years rather than doubling every 18 months. VLSI has been
very lucky that a single process parameter, lambda, can be incrementally
refined to give such huge benefits.

The MHz race has already slowed dramatically. There's now a push for software
developers to take advantage of multicore, which is advancing only very
slowly. Remember that many HN readers write single-threaded (but maybe
asynchronous) Javascript.
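
A rough sketch of that distinction (in Python rather than Javascript, and purely illustrative, with a made-up `crunch` workload): asynchronous code interleaves work on one core, while actually occupying several cores requires restructuring the work into independent tasks.

```python
import asyncio
from multiprocessing import Pool

def crunch(n):
    # Stand-in for a CPU-bound task.
    return sum(i * i for i in range(n))

async def single_threaded(jobs):
    # "Asynchronous", but each crunch() call still blocks the one
    # event-loop thread, so only one core ever does the work.
    return [crunch(n) for n in jobs]

def multicore(jobs):
    # Using several cores means farming the tasks out, here to
    # a pool of worker processes.
    with Pool() as pool:
        return pool.map(crunch, jobs)

if __name__ == "__main__":
    jobs = [200_000] * 8
    print(asyncio.run(single_threaded(jobs))[0])
    print(multicore(jobs)[0])
```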

~~~
Joeri
Maybe we'll see a paradigm shift to functional programming, to take advantage
of all those cores. Or maybe we'll see the adoption of auto-parallelizing
languages and frameworks. A paradigm shift can happen in software as well as
hardware. We could also gain a few orders of magnitude in real-world
performance for many use cases by having bulk storage with the performance
characteristics of RAM.
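
As a sketch of why the functional style helps here (again Python, with a hypothetical stddev-per-chunk workload): when the work is a pure function mapped over independent inputs, the serial map can be swapped for a parallel one without touching the rest of the program, which is the kind of transformation an auto-parallelizing language or framework could do for you.

```python
from concurrent.futures import ProcessPoolExecutor

def stddev(xs):
    # Pure function: no shared mutable state, so running many
    # calls in parallel cannot interfere with each other.
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

def serial(chunks):
    return list(map(stddev, chunks))

def parallel(chunks):
    # The only change from serial() is which "map" is used.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(stddev, chunks))

if __name__ == "__main__":
    chunks = [list(range(i, i + 10_000)) for i in range(0, 80_000, 10_000)]
    assert serial(chunks) == parallel(chunks)
```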

~~~
mpweiher
The "FP for multi-core" trope is a red herring, at least so far. Yes,
immutability makes parallelizing easier. It also makes things in general so
much more expensive that we're still net-negative by a large margin.

For example, Simon Peyton Jones gave a talk[1] about Data Parallel Haskell,
which after many years of development was still slower on 6 cores than C on a
single core.

[1]
[https://www.youtube.com/watch?v=NWSZ4c9yqW8](https://www.youtube.com/watch?v=NWSZ4c9yqW8)

~~~
Symmetry
I think the state of the art for Haskell has improved a lot in terms of
scientific computing since 2010?

[http://research.microsoft.com/en-us/um/people/simonpj/papers...](http://research.microsoft.com/en-us/um/people/simonpj/papers/ndp/haskell-beats-C.pdf)

~~~
mpweiher
Glad to hear that they've improved enough to look good on their own benchmarks
:-)

------
ilzmastr
Looking ahead, how will this affect the programmers reading HN today?

For programming where high performance really matters (throughput, not
latency), adoption of parallel frameworks/languages like CUDA has been a
no-brainer, and people have embraced the new parallel programming model.

Since major speedups in consumer chips today come from adding parallelism
(e.g. Intel AVX), versus the clock-speed increases of a decade ago and earlier...

What does HN think about mainstream effects? Will there be a noticeable
migration to less computationally wasteful or less sequential
languages/frameworks when mainstream applications increase their hunger for
compute resources?

p.s. (I'm assuming that the average application's future equivalent/spawn
needs more and more computational resources, either because of more load from
customers using it, or because it gets more complex and ambitious as the space
of applications gets more complex in the future)
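
As a rough sketch of what that migration can look like (Python/NumPy here, purely illustrative, with a toy saxpy example): the shift is from an explicit element-by-element loop to bulk array operations that a library or compiler can map onto SIMD units like AVX, or onto a GPU in the CUDA case.

```python
import numpy as np

def saxpy_loop(a, x, y):
    # Explicitly sequential: one element at a time.
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_bulk(a, x, y):
    # Same computation as one bulk array expression; the library is
    # free to execute it with vectorized machine code, and the same
    # data-parallel style is what GPU frameworks build on.
    return a * x + y

x = np.arange(1_000_000, dtype=np.float32)
y = np.ones_like(x)
result = saxpy_bulk(2.0, x, y)
```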

~~~
pjc50
The immediate future is probably:

- frameworks continue to waste resources in order to reduce perceived
time-to-market from adopting that framework.

- speech recognition has already shifted to the "cloud". Other more
computationally intensive apps will also be run in the cloud, resisting piracy
and enabling surveillance.

- developers will add more virtualisation layers.

- nobody will solve the IoT security or interop problems, but nonetheless the
number of tiny not very powerful processors with radios will increase. ARM
will ship their trillionth core.

------
Sanddancer
This article misses one of the directions we can go to continue Moore's law:
namely, upwards. Intel demonstrated a 3D Pentium 4 back in 2004, the High
Bandwidth Memory packages on AMD's latest chips are 3D, Nvidia is ramping up
its own HBM technology for its next GPUs, and there are a ton of active
research programs on how to keep yields high as dies are stacked. It may be
the end of Moore's law on a flat surface, but there are still quite a few
places to go.

~~~
hga
I'm not sure that gains you much, but we'll see.

A different story is the move to large, potentially huge, numbers of layers in
making NAND flash. Samsung is shooting for 100 layers and 1 terabit on a
single die (!).

~~~
mud_dauber
3D stacking doesn't, strictly speaking, answer the issues with Moore's Law
limits. It instead addresses the very real problem of getting data between the
processor and external memory. Pins are expensive and signal speeds aren't
keeping up with processor speeds.

------
PedroBatista
"but chip costs will continue to fall "

Apparently Intel didn't get the memo...

~~~
Omin
costs are falling, prices aren't

~~~
hga
Costs are falling quite a bit less than normal. The tick from 32 nm Sandy
Bridge to 22 nm Ivy Bridge went smoothly, as far as we can tell, but Non-
Recurring Engineering (NRE) costs have been going up, for example more and
more masks are required for each new node, starting at 32 nm:
[https://en.wikipedia.org/wiki/Multiple_patterning](https://en.wikipedia.org/wiki/Multiple_patterning)

The next tick from Haswell at 22 nm to Broadwell at 14 nm did not go well. It
was seriously delayed due to low yields, per Wikipedia they're not doing low
end desktops with it (perhaps due to NRE? Then again, these ought to be server
chips with features cut out), and the Skylake microarchitecture on 14 nm has
started coming out relatively quickly, cutting short the normal lifetime
Broadwell should have had. The Cannonlake shrink to 10 nm has been delayed at
least 2 years, with a Kaby Lake mid-life kicker at 14 nm breaking the
tick-tock system they've used since 2007.

No doubt this is paying off in reduced power for the computrons in mobile and
server systems, but costs aren't doing nearly so well, and that's apparently
breaking Moore's Law. Intel is supposed to be the best at this, and foundries
like TSMC have also been having problems at various points as they shrink.

