
No Moore? A golden rule of microchips appears to be coming to an end - nopinsight
http://www.economist.com/news/21589080-golden-rule-microchips-appears-be-coming-end-no-moore
======
ChuckMcM
This has been pretty evident to systems folks since 2005. And the key here is
the insight that Moore's "Law" was really powered by the fact that the cost of
a wafer start[1] was constant. So double the transistors and you can make the
same chip for half the money, or as most people did, a chip that was 50%
"better" for 75% of the same money.

Now that the cost of wafers is going up, making a chip that is 50% better
costs 50% more money. That means that this year's computer is more powerful
than last year's computer, except it costs 15% more (chipset cost is often 1/3
the cost of parts for a machine). And if your current laptop is 'good enough'
in terms of power such that you aren't willing to pay 15% more, you don't buy
it. And _that_ is what is going to really be interesting here.

The new device costs more than the old device and isn't any more capable from
a feature perspective.

30 years of PC marketing "lore" goes "Poof!" Now the only way to make your
machine faster and cost less is to write more efficient software. Think about
that carefully. It will define success in this next decade.

[1] In the chip business a wafer start is the process of sending 1 or more
wafers through the process of being made into chips. It is the smallest unit
of manufacturing.
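The economics above can be sketched as a toy model (all numbers invented for illustration, not actual wafer prices):

```python
# Toy model of cost per transistor: constant wafer cost vs. rising wafer cost.
# All dollar figures and transistor counts below are made up for illustration.

def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

# Old regime: a node shrink doubles transistors per wafer, wafer cost stays flat.
old = cost_per_transistor(5000, 1e9)
new_flat = cost_per_transistor(5000, 2e9)
print(new_flat / old)   # 0.5 -- same chip for half the money

# New regime: the shrink still doubles density, but the wafer costs 50% more.
new_rising = cost_per_transistor(7500, 2e9)
print(new_rising / old)  # 0.75 -- the "free" halving is gone
```

The point is the ratio, not the absolute numbers: once wafer cost rises with density, the per-transistor cost no longer halves per node.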

~~~
gizmo686
>Now the only way to make your machine faster and cost less is to write more
efficient software.

We're still a long way away from that. Switching over to SSDs represents a
huge speed boost, and given how young SSDs are, I suspect that there is still
room for improvement in their speed. There are also likely improvements to be
made in CPU architecture. We are still using x86, which has been improving
incrementally with backwards-compatible changes since the 70s. I'm not
familiar with CPU architecture, but I suspect this means there is room for
significant improvements in per-transistor efficiency with the use of a novel
CPU architecture. We could also see a drop in prices due to economic, not
technological, forces. For example, the price of a CPU is significantly higher
than the cost to produce one, because you are paying for the development of
the CPU. Without continuing fundamental improvements, we would expect the
cost of CPUs to fall toward their marginal cost, as development would no
longer be necessary.

~~~
bhouston
With the rise of iOS, Android, and ChromeOS which hide the CPU architecture
completely from the user (especially when running Java or JavaScript
applications), there are opportunities to change CPU architectures in more
radical ways in the future.

But then again, the gains possible may be more limited than you suggest,
because the transition from x86 to x86-64 did involve a number of major
efficiency gains from changing register counts and how FP calculations were
done. There may not be huge efficiency gains left that are easy.

With LLVM basically being a new runtime in some cases, we can do more complex
compilation strategies such as was attempted with the Itanium architecture:
[https://en.wikipedia.org/wiki/Very_long_instruction_word](https://en.wikipedia.org/wiki/Very_long_instruction_word)

Who knows... I'm just brainstorming today. Happy New Years!

~~~
gizmo686
The transition from x86 to x86-64 did not seem like a radical redesign of the
architecture. x86-64 chips are still capable of running code written
for x86 chips without modification (even 16-bit code, I believe).
Furthermore, the programming model (as exposed to assembly) did not seem to
change between the two architectures.

------
belluchan
This "Moore's Law coming to an end" meme reminds me of Voyager 1 leaving the
solar system. You could just replace the text in the xkcd and it would be just
as funny: [http://xkcd.com/1189/](http://xkcd.com/1189/)

~~~
kken
This is the only valid response! Classical scaling (aka Moore's law) ended
with the 130nm node more than 10 years ago. The business model has been slowly
changing since then.

~~~
amock
Can you expand on this? What happened after 130nm that was different than the
previous scaling?

~~~
kken
Companies started to introduce new technological features starting with the
90nm node. This slide is a nice top level summary:

[http://www.brightsideofnews.com/Data/2011_5_6/Intel-Manufact...](http://www.brightsideofnews.com/Data/2011_5_6/Intel-Manufacturing-Event-in-Munich-22nm-Breakthrough-Analyzed/Intel_Transistor_Leadership.jpg)

------
heydenberk
This piece from 2011 is still, to me, the definitive discussion of the end of
Moore's "law" as we know it: [http://herbsutter.com/welcome-to-the-jungle/](http://herbsutter.com/welcome-to-the-jungle/)

~~~
ghaff
Nice piece--and, I suppose, ultimately hopeful in the sense that it suggests
there are a variety of paths to increased throughput even if we're close to
losing Moore's Law (at least as it comes largely from CMOS scaling). One does
wonder, though, what the ultimate effect of losing the CMOS scaling lever will
be on IT economics. Certainly, not all computing innovation has been about
continued CMOS process shrinks. But those have been the foundation for an
awful lot.

------
rubiquity
I feel like the biggest killer of Moore's Law is how much emphasis has been
put on lowering/maintaining power consumption in recent years, similar to
airplane performance/cost in the 1970s.

~~~
rm445
Should that matter for Moore's Law as it was defined, for number of
transistors on a die? I thought that smaller transistors required less current
to operate, so the incentives for lowering power consumption and upping
transistor count are aligned.

~~~
wmf
Dennard scaling (which says that smaller transistors use less power) is also
ending.

~~~
kken
This is actually the same as Moore's law. Dennard was the one to define the
scaling law with respect to the physical properties of the transistors.

------
habosa
Nothing against the article because it doesn't falsify any of the conclusions,
but Moore's Law is often mis-stated. Moore said that we could double the
number of components in an integrated circuit every 18-24 months, nothing
more. It had nothing to do with cost or power, although he did dismiss the
thermal energy argument in a sentence or two in the original paper.

The interesting issue has to do with Dennard's Law for power scaling, which
said that power density would remain constant as we increased component
density. This isn't true any more, and that's part of the reason why multicore
designs are the future (2 processors at 500MHz use less power than one at 1GHz
for a fully parallel workload).
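The parenthetical claim can be sketched with the usual first-order dynamic-power model, P ≈ C·V²·f; the capacitance and voltage figures below are illustrative assumptions, not measurements:

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# C and the supply voltages below are made-up illustrative values.

def dynamic_power(c, v, f):
    return c * v**2 * f

C = 1.0  # arbitrary switched capacitance per core

# One core at 1 GHz needs a higher supply voltage to reach that frequency.
single = dynamic_power(C, v=1.2, f=1e9)

# Two cores at 500 MHz can run at a lower voltage; total power is their sum.
dual = 2 * dynamic_power(C, v=0.9, f=0.5e9)

print(dual / single)  # ~0.56: less total power, same parallel throughput
```

Because power scales with V² and lower frequency permits lower voltage, splitting the work across slower cores wins, for a fully parallel workload.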

~~~
ghaff
It did have to do with cost as well. From the original paper: "That means by
1975, the number of components per integrated circuit for minimum cost will be
65,000. I believe that such a large circuit can be built on a single wafer."

In other words, it was the number of transistors that could _economically_ be
included in a single unit.

------
bsamuels
Required reading for anyone who didn't see the paper last week:

The Slow Winter by James Mickens
[https://www.usenix.org/system/files/1309_14-17_mickens.pdf](https://www.usenix.org/system/files/1309_14-17_mickens.pdf)

~~~
walshemj
Some day they are going to have to get this guy (James Mickens) on Dara Ó
Briain's Science Club show and/or The Infinite Monkey Cage radio show.

------
lifeisstillgood
I think it will come back to needing new and smarter software. Perhaps "more
efficient software" is a better term. Magnetic disk storage is now at silly
levels, and will not hit the theoretical maximum of 20TB, but even so,
extracting all the (randomly fragmented) data off a 4TB disk will take days
to weeks. So we shall head for a time not unlike the 60s/70s, when you
arranged storage on your tape drive based on the speed of the tape passing
under the head.

Whatever architectures we come up with, I think humanity will see continuing
growth in computing power (perhaps not as insanely fast, but fast). But using
that power effectively will no longer be a free ride for developers.
One last thing: the More or Less podcast quoted this stat: a chip delivering
one billion FLOP/s today costs 19 cents. In 1961 no such machine existed, but
had we tried to build one to perform one billion operations a second, it would
have cost $1.1 trillion, or about the entire world's GDP.

Just puts Moore's law in perspective for me.
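Taking the quoted figures at face value, a quick back-of-the-envelope check (the $1.1 trillion and 19-cent numbers are as quoted from the podcast; the end year is an assumption for "today"):

```python
import math

# Quoted figures: ~$1.1 trillion for 1 GFLOP/s in 1961 vs. $0.19 "today" (~2013).
cost_1961 = 1.1e12
cost_now = 0.19

ratio = cost_1961 / cost_now           # ~5.8e12x improvement in cost per GFLOP/s
doublings = math.log2(ratio)           # ~42 halvings of cost
years = 2013 - 1961
months_per_doubling = years * 12 / doublings

print(f"{ratio:.2e}x cheaper, {doublings:.1f} halvings, one every ~{months_per_doubling:.0f} months")
```

That implied halving of cost roughly every 15 months sits right in the range usually quoted for Moore's law, which is what makes the stat so striking.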

------
jaunkst
It's on life support, and pivoting to newer technologies. The turning point is
towards integrated optical and quantum effects, or even switching from silicon
to gallium arsenide. Manufacturing processes are changing, and there is no
need to create anything smaller than 10nm.

~~~
iyulaev
Why not Gallium Nitride?

------
GauntletWizard
I was quite disappointed to discover this recently. I went out and bought a
new machine; I looked at all the specs, bought a sensible machine, put it
together. It ran horribly. I looked everywhere... and finally checked Tom's
Hardware. My processor, which had cost me $300 at the beginning of 2012,
outperformed the one that I'd just bought for $200. This was... pretty much
unthinkable to me. Every other two-year gap in my life had been incredible;
even cheap machines were way better than their counterparts. Now I've wasted a
bunch of money, a bunch of time, and gotten a worse machine.

------
ksec
Yes, but there are a few problems that haven't quite been mapped out. While
what it says there is true, it is built on the fact that previous nodes and
growth were driven by the PC market. Now the maths doesn't add up well,
because that market is shrinking, or not growing fast enough to keep up with
the pace of node invention and wafer cost. The more wafers used, i.e. the
larger the scale of production, the lower the cost per transistor. With
tablets, wearable computing and mobile phones, these booming markets could
drive the cost economics of Moore at least to 10nm or 7nm.

------
rbanffy
I've been hearing about the end of Moore's law ever since I was in college
(late 80's). There are indeed some hard limits for Silicon, but there is still
a lot of space for other materials, major improvements in interconnects and
better software.

~~~
salient
Don't confuse the end of Moore's Law with the end of computer technology
progress. Moore's Law is actually a pretty specific law. It says the number of
transistors in the same amount of space doubles every 2 years. We're reaching
the "end of Moore's Law", because we're getting to the point where we're close
to making a transistor the size of an atom, which I think is pretty much the
end of "doubling the number of transistors in the same amount of space every 2
years".

That doesn't mean that after we reach this limit we won't be making quantum
computers, or use other materials like graphene transistors that can probably
reach terahertz clock speeds, which may even improve at a rate of 2x every 2
years, too. But it wouldn't be "Moore's Law".

We're also working on "brain-like" computers, but I think these and quantum
computers will become "mainstream", in a general-purpose way (not just for
very specific tasks), in a few decades, which should be quite a while after
Moore's Law dies (in the 2020's).

So the most likely thing is that we'll use other kind of materials for the
same type of "classical computers" for the next few decades, even after the
end of Moore's Law, but they will improve through increasing their clock speed
or through other ways, rather than getting that extra performance from adding
more transistors.

~~~
rbanffy
Originally, it stated that the number of components _per integrated circuit_
would double every year. If the defect density (or susceptibility to defects)
goes down, it's possible to manufacture larger, more complex components for
the same price. This sort of applies to stacked dies, with the denser
interconnects and added bandwidth between layers yielding effectively higher
density than a single chip (but that would be kind of cheating on Moore's
Law).

A looser formulation talks about computing power. In that, we have been ahead
of Moore's law with GPGPUs and vector units for quite some time. This
formulation is more detached from physical process details and looks more like
what the market can absorb.

In the end, what the market can absorb may prove the ultimate limit.

------
shawnee_
While it's true that the shrinking of chip feature sizes is decelerating,
there are still plenty of ways micro and nano computing will be
optimized. Some of the more interesting developments are happening in the
world of graphene R&D. This article is a little dated for such a fast-moving
industry, but it's semi-relevant:

[http://graphenewire.blogspot.com/2012/10/extending-memory-be...](http://graphenewire.blogspot.com/2012/10/extending-memory-beyond-moores-law.html)

~~~
kken
Graphene is interesting, but nobody in the industry believes that it will be
the holy grail:

[http://www.techdesignforums.com/blog/2013/12/10/graphene-get...](http://www.techdesignforums.com/blog/2013/12/10/graphene-gets-reality-check/)

(This is pretty different from what academia tells you...)

~~~
erikpukinskis
The critique you link to comes from an academic.

~~~
kken
This does not mean that he is not able to present things in their proper
context. However, the general situation with graphene in academia right now is
that it is a good source of grant money. Hence you see wildly exaggerated
claims of its capabilities.

------
wikiburner
3D processors

memristors

improved parallel processing

graphene

quantum

spintronics

optical

Did I miss any?

~~~
kken
Sure, you missed all the real stuff that is going on:

\- II-VI channel materials on silicon

\- FinFET, Nanowire FETs (the top-down variant, not the academic bottom-up
version)

\- RRAM (Memristors are a sub-class of these devices. Memristors mainly exist
in the PR department of HP)

\- TSV integration

\- 3D Flash

\- Memory-integrated processors (see Micron's Automata)

~~~
bhouston
It would be cool to write an article explaining this stuff and its expected
impact in terms that most software developers could understand. I love reading
this stuff, but putting it into context would be useful. Your response
suggests you know what you are talking about, but it is fairly
incomprehensible to almost everyone else not deep into semiconductor
technologies without looking up each term on Wikipedia.

~~~
kken
IEEE spectrum has a good and accurate general coverage of these topics.

[http://spectrum.ieee.org/semiconductors](http://spectrum.ieee.org/semiconductors)

------
tunesmith
Not being an EECS person and having only taken a couple of Intro to Circuits
survey courses, can someone fill in my ignorance?

EECS people make math convenient for themselves by pretending that voltage
numbers and current numbers cancel out in certain ways. Otherwise they'd have
to do really messy calculus. The problem is that this convenience is only true
for circuits that are up to a certain speed. Past that speed, that way of
calculating and engineering circuits starts to break down. We've hit those
speed limits.

The field has chosen to react by instead switching to multi-core computers.
The problem from a programmer perspective is that parallelism is difficult to
write for. The reason it is difficult to write for is because of mutable
variables, which are common in imperative languages. So that's driving the
popularity of functional languages. However, not as many people are good at
functional languages, since they tend to be more mature in academic circles,
less mature for industry purposes, and generally more difficult to learn for
people that are more used to procedural thinking than mathematical thinking.

So, if all those premises and implications are true that would mean that maybe
if the EECS people stopped pretending that EECS circuits are simple and
started doing the messy calculus, maybe we could start shrinking single-core
chips again. But I'm sure I've messed up some of those premises. Anyone?

~~~
gizmo686
I'm also not an EECS person, so I do not know if we are here yet, but at some
point you will have to deal with quantum effects.

~~~
zhemao
We already do have to worry about quantum effects. Electron tunneling is a
real concern and one of the causes of chip degradation over operating
lifetime. Essentially, an excited electron can jump across the transistor gate
oxide and get trapped in the gate metal. If this happens enough, the
threshold voltage of the transistor increases, making it slower.

------
jeremiep
I thought Moore's law had been replaced by Amdahl's law a decade ago. Moore's
law is only about the number of transistors in a chip rather than the
theoretical speed improvement limits of multicore chips.

------
freshhawk
What's the law for how many stories there are per month about this obvious-
for-close-to-ten-years-and-was-only-ever-a-marketing-meme-anyway "fact"?

------
mwcampbell
I hope this happens soon. The ability to run current software on a 10-year-old
computer (regardless of form factor) would be a great equalizer.

------
CSDude
We were aware of this in 2005, when the Pentium 4 could not evolve into a
Pentium 5.

------
geuis
Please don't link to articles behind paywalls.

------
robg
Or a pause.

------
dadagaaa
Water has been found on Mars, again?

------
wbsun
Okay, seriously, isn't this something only non-technicians care about?

