
The Collapse of Moore's Law: Physicist Says It's Already Happening - narad
http://techland.time.com/2012/05/01/the-collapse-of-moores-law-physicist-says-its-already-happening/
======
DanBC
People have been predicting the end of Moore's law[1] for years - since at
least the mid-90s.

Luckily, most people today are at the point where they just don't need more
power. Most people need enough to run a web browser and do a bit of word
processing. They might need enough to open a spreadsheet or a presentation.

Most people would be happy with something like a good tablet and a monitor /
keyboard dock.

It would be nice if the churn of "more power" could switch focus to "power
efficient" or "better optimisations" or "better architecture" or even "future
technology with advanced architectures".

Sure, there are people who need media streaming, or gaming, or compiling, or
multicore crunching, or rendering farms, and so on. They'll always have
machines. I guess parallelisation needs to improve a bit and clustering needs
to get better.

[1] Some of these people were using the less formal definition of "double the
'power' / 'speed'" rather than the more formal "double the number of
transistors per IC".

~~~
gaius
There was enough power on the desktop for commercial-grade word processing and
spreadsheets, at an affordable price, 30 years ago. Ignoring all the bells and
whistles, there's actually not much you can do on a 2GHz quad-core PC with 8GB
of RAM that you couldn't do on a 2MHz 6502 machine with 32k or 64k of RAM.

The failure of the industry is that we have thousands of times more power, and
we can't think of anything to do with it. And the keyboard of a Beeb or a C64
was much nicer than most modern keyboards too...

~~~
lifeformed
The answer is video games.

------
wpietri
Two things that really interest me about this:

For a couple of decades we've been trading computational resources for
programmer convenience. That worked very well when cores were rocketing up in
speed year on year. But since we can't count on that, I think we'll be taking
a harder look at our tools.

But I think we're going to have to take an even harder look at how we work. As
processors stop getting faster, software won't become obsolete as quickly.
We've all left a lot of mistakes behind in code killed in platform shifts. But
now there may be no escape. I don't know what will come of that, but I'm sure
interested to see.

------
tokenizer
I subscribe to Kurzweil's camp: I think the halting of Moore's law will
pressure a paradigm shift in computing. Just as the electromechanical, relay-
based, vacuum tube, and transistor paradigms each gave way, the integrated
circuit paradigm will cease to be relevant when something like memristors or
three-dimensional molecular computing comes into play.

~~~
anamax
> the integrated circuit paradigm will cease to be relevant when something
> like memristors or three-dimensional molecular computing comes into play.

What part of "integrated circuit paradigm" doesn't apply to memristors or
three-dimensional molecular computing?

I ask because HP's Williams seems to say that memristors are firmly in the
"integrated circuit paradigm", but maybe he means something different than you
do.

~~~
spot
Moore's law is very much rooted in photolithography, which is a 2D process.
And memristors do fit in this. So does the 3D computing that Intel is doing
now, actually, since it still has a flat product.

But if there were ever truly solid 3D computers, they might scale on a
different curve, presumably a sharper one due to the cubic scaling.

~~~
Retric
Photolithography is already limited by heat density. Going 3D sounds great,
but it's far less helpful than you might hope, and clock speeds are dominated
more by how long it takes a transistor to switch than by how long it takes
electricity to travel through those tiny wires.

------
DennisP
"Using standard silicon"...in other news, Samsung just had a breakthrough
making logic circuits in graphene, and hopes to commercialize 100x faster
chips by 2020:
[http://www.asiaone.com/News/Latest%2BNews/Science%2Band%2BTe...](http://www.asiaone.com/News/Latest%2BNews/Science%2Band%2BTech/Story/A1Story20120519-346919.html)

And it looks like memristors will hit the market around 2015, giving us much
faster storage and nonvolatile RAM. According to HP, "We put the non-volatile
memory right on top of the processor chip, and, because you’re not shipping
data off-chip, that means we get the equivalent of 20 years of Moore’s Law
performance improvement"
[http://www.electronicsweekly.com/Articles/22/05/2012/53718/u...](http://www.electronicsweekly.com/Articles/22/05/2012/53718/ucl-makes-memristors-manufacturable.htm)

Down the road a bit further, memristors could be used for neural-network
coprocessors, since they function a lot like synapses.

The smooth progression of Moore's Law will probably get more jumpy, but the
long-term trend goes back to mechanical adding machines.

~~~
wmf
Playing devil's advocate, if Moore's Law ends in, say, 2015 then Samsung and
other chip makers may not even be in business in 2020.

~~~
reitzensteinm
I think it would take a lot longer than 5 years, even if node shrinking
stopped dead.

You can always add more functionality to SoCs (e.g. better video decoding),
tweak current cores (Intel gets more performance out of processor redesigns
than node shrinks), and optimize your current node (power usage typically
drops, and yield increases with time on the same node).

You would also get way more focus on GPGPU computing, with initiatives like
Intel's QuickSync. Lots of performance/watt to gain there, if there's no
alternative.

Plus people making the hardware and software of phones and laptops need to
sell units too, so they'll pick up some of the slack. Not every iPhone is a
significant upgrade in terms of performance.

And lastly, a whole bunch of chips are being bought for reasons other than the
old one being too slow; people are getting multiple computers, more gadgets,
and in developing countries people are getting their first computers. Devices
are becoming disposable. Laptops die and get replaced, even though a Pentium M
still stacks up relatively well these days.

At 20+ years, though, you may have a point. I have no idea what that would
look like. Though with depreciation on fabs slowing way down, even if sales
were lower, the margins would look pretty damn good at today's prices. Plus
each node shrink wouldn't be a potential disaster; the fab companies still
around might even be more financially stable.

------
dsr_
Since it isn't a physical law, but an observation of the average rate of
advance of technology, saying that it is 'collapsing' is pretty silly.

Then the article undermines itself by talking about 3D transistors and
molecular valves and other possible ways forward.

In the end, computational density is likely to be limited by heat transfer
rates. Having your CPU melt itself is rarely desirable.

~~~
DennisP
If someone figures out how to make reversible computing practical, heat won't
be an issue either.

------
bryanlarsen
Moore's law has been redefined several times already. At the beginning it was
every 12 months, then 18, and now it's 24. What's being measured has also
changed. It used to be frequency scaling that we measured, but that changed
after the debacle of the Pentium 4.

But the essence of Moore's law has always remained: the exponential growth in
the capability of ICs. The other relatively constant aspect of Moore's law is
that people have always predicted it to last another 10 years. When Moore
first formulated the law, that's approximately what he believed. So when
somebody says that they expect Moore's law to end within 5-10 years, I take
that with a big grain of salt.

What will kill Moore's law will be an unwillingness to continue spending more
money to build semiconductor plants. It used to cost only a few hundred
thousand dollars to build an IC plant. This cost has increased exponentially
to the point where a plant today costs $10 billion. It's reasonable to suppose
that a consortium of semiconductor manufacturers could invest $100 billion in
a plant, but on current trajectories, a $1 trillion plant may be necessary to
continue Moore's law in 10 years. Do you think that will happen?
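
For a rough sense of the growth rate those figures imply, here's a
back-of-envelope sketch in Python (it uses only the rough dollar amounts
quoted above, not real industry data):

    import math

    def doubling_time_years(start_cost, end_cost, years):
        # Doubling period implied by exponential growth from start to end.
        return years / math.log2(end_cost / start_cost)

    # $10B today -> $1T in ten years implies fab costs doubling roughly
    # every 1.5 years:
    print(doubling_time_years(10e9, 1e12, 10))  # ~1.5

    # Projecting forward at that rate lands back on ~$1 trillion:
    print(10e9 * 2 ** (10 / 1.5))  # ~1.0e12

Keep that rate up for a further decade and the arithmetic says a $100 trillion
plant, which is on the order of the entire world's annual GDP.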

~~~
carsongross
Yes.

The costs of the exotic silicon and post-silicon alternatives are dramatically
higher than today's costs, which are dramatically higher than yesterday's.
Couple that with the fact that the majority of people have far more computing
power than is necessary for their day-to-day existence (e.g. emailing, posting
on FB, and looking at cat pictures and/or pornography). So costs continue to
increase by some super-linear function but benefits follow a sigmoid curve.

The disconnect between marginal cost and marginal demand for computational
power is what's going to knock Moore's Law off its curve IMO, not some hard
technical barrier. The futurists are always spending someone else's imaginary
money, so they don't have to worry about stuff like that.

~~~
ericb
> The futurists are always spending someone else's imaginary money.

A $35 Raspberry Pi can outrun what used to be multi-million dollar mainframes.
Are you sure that cost factors are relevant when the trend is so strongly
toward lower costs? If prices drop, then lower-utility uses become feasible.

~~~
carsongross
Past returns do not indicate future performance. Technical innovation is
slowing down, not speeding up. Raw, usable linear CPU performance at the
fingertips of normal humans using personal computers has been essentially flat
for half a decade, with the majority of material perf changes coming in the
I/O subsystem, and yet the CPU still sits idle for the vast majority of its
life.

I'm probably wrong, but it's still a reasonable position: we've seen technical
stagnations before. I'm simply (and probably incorrectly) projecting the
_current_ trend I see as an end user of technology: raw straight-ahead CPU
perf is stagnating, and unless I see real progress on AI (which I don't) I'm
skeptical that a lot more of it will help most people doing most things,
unless we are talking a _lot_ lot more.

If some of the more exotic technologies get practical and cheap, I'm happy to
be wrong, but I think it's important to note that Moore's Law was
observational: he was describing what he saw happening rather than envisioning
it. We are no longer seeing what he saw. A phase change may fix that, but
that's very different from what we've been mining for the last half century,
and much harder to project out.

------
karolist
I don't know, but I've always found it odd that it's called a "law"; that
gives too much scientific credit to something that was clearly derived by
simple observation and guesswork, and has no underlying science to prove it
except that the statement held for some time.

~~~
panacea
They acknowledge in the article that it isn't a real scientific law.

~~~
okamiueru
At the risk of being a bit too pedantic, there is no such thing as a 'real
scientific law' either. Though I admit there is a general understanding of
what is implied by 'a scientific law', and I assume that's what you and the
article are referring to.

------
wtvanhest
Two years ago I worked at Intel, and they talked about Moore's law and how
people always said it would end, but it never does due to continued
innovation. I think everyone knows they have a lot of people working on a lot
of possible solutions to these physics problems. Intel is like a supercharged
academia where hitting objectives matters. They don't want
engineers/physicists who say it cannot be done; those people are stuck on the
outside writing posts like this one. Those posts come up continually (do a
search for "end of Moore's law 200X") and you will see articles written every
year.

Here is a recent quote from an interview with Mark Bohr, Intel's senior fellow
and director of process architecture and integration.

“"The end of Moore's law has always been 10 years away, and it will always be
10 years away," he said. He's been hearing predictions about the end of
scaling since he joined the industry 30 years ago, so he isn't worried. He
said 14nm is in full development and on track for manufacturing readiness in
the second half of next year.”

[http://forwardthinking.pcmag.com/show-reports/297801-intel-t...](http://forwardthinking.pcmag.com/show-reports/297801-intel-the-end-of-moore-s-law-is-still-10-years-away)

~~~
Retric
For a while transistor densities doubled every 12 months. That stopped
happening; then for a while they doubled every 18 months, until that stopped
happening too. Now, if you're willing to accept transistors doubling every
50,000 years as the same exponential growth as doubling every 12 months, then
sure, it will continue for a long time. But that's not 'Moore's law' as
originally stated.
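
Just to put numbers on how much the doubling period matters, a quick
illustration (nothing here beyond the 12/18/24-month figures being discussed
in this thread):

    # Growth factor over one decade for different doubling periods.
    for months in (12, 18, 24):
        print(months, round(2 ** (120 / months)))
    # 12 -> 1024, 18 -> ~102, 24 -> 32

A law that doubles every 24 months delivers about 32x in a decade; the
original 12-month formulation would have delivered 1024x.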

~~~
wtvanhest
Since 1975 it has been every 24 months. I had never heard the 12-month number
until reading it on HN today.

Someone at Intel at some point said 18 months, but it was most likely a
one-off conversation/presentation rather than a restatement of the law, and it
occurred well after 1975.

~~~
Retric
12 months is what he actually said in 1965.

 _The law is named after Intel co-founder Gordon E. Moore, who described the
trend in his 1965 paper.[2][3][4] The paper noted that the number of
components in integrated circuits had doubled every year from the invention of
the integrated circuit in 1958 until 1965 and predicted that the trend would
continue "for at least ten years".[5]_

<http://en.wikipedia.org/wiki/Moores_law>

~~~
wtvanhest
I wish people would stop citing wiki on HN, especially without reading the
entire page. On the exact same wiki page, under History, it says:

 _Moore slightly altered the formulation of the law over time, in retrospect
bolstering the perceived accuracy of his law.[17] Most notably, in 1975, Moore
altered his projection to a doubling every two years.[18][19] Despite popular
misconception, he is adamant that he did not predict a doubling "every 18
months." However, David House, an Intel colleague, had factored in the
increasing performance of transistors to conclude that integrated circuits
would double in performance every 18 months.[note 2]_

------
ktosiek
Moore's Law was about the number of transistors you can put on a chip cheaply,
so it may be coming to an end. But I don't think that will stop machines from
getting faster - we are seeing lots of progress in multi-core usage on
desktops and in mobile computing (multicore smartphones), and using dedicated
hardware outside the CPU socket/package (GPGPU) is becoming normal too - those
look like new ways of making faster personal machines.

~~~
toemetoch
On the x86 front I'm a bit pessimistic. In 2006, Intel promised 80 cores by
2012, but I haven't seen them anywhere yet [0].

[0] <http://news.cnet.com/2100-1006_3-6119618.html>

(Disclaimer: don't know how the crowd here reacts to CNET articles)

~~~
gaius
Well, there was this:
<http://www.theregister.co.uk/2009/05/15/larrabee_32_cores/>

~~~
toemetoch
It was cancelled in 2010.

<http://en.wikipedia.org/wiki/Larrabee_(microarchitecture)>

~~~
gaius
Ah but
[http://en.wikipedia.org/wiki/Knights_Corner_(Intel)#Knights_...](http://en.wikipedia.org/wiki/Knights_Corner_\(Intel\)#Knights_Corner)

~~~
toemetoch
Well played, sir.

------
anusinha
Moore's Law in silicon will eventually run out. This is true and is physically
provable. And there are already a lot of smart minds (at Intel, at a few other
companies, and at many universities) developing post-silicon technologies. It
is possible that we will one day shrink down to true molecular electronics
using organic semiconductors or similar. As Feynman said, there's plenty of
room at the bottom. 3D molecular electronics are exciting to me: architectures
will have to be redesigned from scratch, maybe RISCs too. It's an exciting
time.

(I omit talking about quantum computing because that's currently mostly
theoretical. But back in the day, Turing, Church, et al. were talking about
computability, etc. when a real computer (von Neumann architecture) did not
yet exist. So maybe (I would be confident enough to say probably) down the
line we'll have a quantum computer, but it may not be soon.)

------
nextparadigms
I don't think we have to worry too much about it until 2030 or so. At least
until 2020 it should be a smooth ride, and then they'll probably stack chips
on top of each other or find some other work-around for the next decade or so,
until something else comes out.

------
redwood
In terms of comparing clock speed with, say, the brain: isn't it more the
parallelization that we're lacking, rather than the speed? In other words,
let's say we _could_ parallelize a computer network as complicated as the
brain... with current processor speeds, couldn't we already re-create
effective human intelligence?

I guess I mean to suggest it's the parallelization/inter-connectivity and
algorithmic challenge, rather than the speed challenge, that has so far kept
us from the singularity.

Curious if others agree?

~~~
toemetoch
Something you're probably going to be interested in: Whole Brain Emulation.
[0]

Human intelligence is IMO something that is referred to in pop culture as
"emergent behavior" [1]. Problem is that you risk building the electronic
equivalent of a cargo cult [2]. After all, on a cellular level of a nervous
system, what's the difference between what happens in the brain of a chimp and
that of a human? How detailed do you have to go to build a platform that _can_
house intelligence?

On a philosophical level you kinda end up with a contradiction: is the human
brain complex enough to understand/build the complexity of a human brain?
Personally I'd be happier if research in this field was a bit more modest. I
once read (somewhere in the 90s) that there was a researcher in the U.S. who
was mapping every cell's behavior in the fly's "brain". With current tech, it
would be feasible to implement that into a simulator and see what pops out.
But doing the same for a human brain? Let's first try to come up with a
definition for fuzzy meta-sensations such as intelligence and emotions that
would translate to computer lingo.

[0] <http://www.youtube.com/watch?v=kRB6Qzx9oXs>

[1] <http://en.wikipedia.org/wiki/Emergence>

[2] <http://en.wikipedia.org/wiki/Cargo_cult>

~~~
redwood
Thanks for this great comment.

I'd suggest it's not a contradiction in so far as... we're not talking about
one human brain creating another. We're talking about the collaborative work
of countless human brains adding on to each other's work. E.g. parallelization
of brains is what enables great human strides, and so far seems quite
limitless :)

~~~
toemetoch
You're welcome.

 _We're talking about the collaborative work of countless human brains adding
on to each other's work._

Every bit of progress in humanity is the work of an individual, either working
alone or in a group. Collaboration still revolves around individual
contributions. In a lot of cases (e.g. group discussions and brainstorms) we
get the impression that it's the group producing the result, but in reality
it's incremental individual work.

You're claiming that collaboration can overcome the issue (a complex brain
understanding its own complexity). I claim it can't: the ceiling of what can
be achieved depends on what the smartest brain (so to speak) can comprehend
and tie together.

An ant doesn't know how its colony functions; it takes a human to capture
that. Same for us: it takes a more evolved brain to understand what makes us
tick; we'll only ever sample aspects of it. A system can never be complex
enough to understand its own complexity.

------
jbooth
I'm not an electrical or thermal engineer, but even if we hit a wall as far as
process sizes go, couldn't they make bigger dies with more cores?

Like, if we can't get much smaller than a 12nm process, couldn't we just make
the die twice as big and put a big ol' heatsink on it to get to 128 cores or
whatever?

It seems like architecture is still a bigger barrier than process size when it
comes to getting >16 or >32 cores on a chip. How far will the current ring bus
architecture that Intel has scale?

~~~
bryanlarsen
Costs for larger dies do not scale linearly. A chip with twice the area is at
least 4x as expensive, if not more, depending on many factors like defect
density and die packing. But you're right that this is the direction the
industry will move in if process innovation stops. As a process matures,
defect rates drop, making larger chips more feasible.
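
To see roughly where the "at least 4x" comes from, here's a sketch using the
textbook Poisson yield model (the defect density below is made up purely for
illustration, not a real process number):

    import math

    def cost_per_good_die(area_cm2, defects_per_cm2=0.5):
        # Poisson yield: fraction of dies that come out with zero defects.
        yield_fraction = math.exp(-defects_per_cm2 * area_cm2)
        # Wafer area consumed per *good* die, in arbitrary units.
        return area_cm2 / yield_fraction

    small = cost_per_good_die(2.0)  # a 2 cm^2 die
    large = cost_per_good_die(4.0)  # double the area
    print(large / small)            # ~5.4x, not 2x

Lower the defect density and that ratio shrinks toward 2x; add the extra edge
waste from packing big rectangles onto a round wafer and it grows again.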

~~~
jbooth
Thanks, I appreciate that.

So basically, larger dies == a greater chance that a given chip has a defect,
a bigger cost and a bigger % of the wafer lost for each defective chip, and
presumably more wasted silicon on the wafer as well?

------
nodata
I thought the increases in efficiency counted in place of the increases in
clockspeed.

------
_quasimodo
Thank god! Finally companies will start optimizing their software again.

