

World With New Limits: The Coming End of Moore's Law - plessthanpt05
http://www.linuxinsider.com/story/World-With-New-Limits-The-Coming-End-of-Moores-Law-79015.html

======
leokun
The end of Moore's Law is like the "Voyager leaves the solar system" kind
of post that just keeps repeating ad infinitum on HN. There are real
drawbacks to popular upvotes as a means of populating the homepage feed. I
hope someday someone figures this out.

~~~
mikeash
Unlike Voyager, this story long predates HN. I remember reading stories like
this in the 90s.

At some point, of course, it'll actually happen, and that round of stories
will turn out to be correct. It could even be that this round is that
round. But I certainly don't see anything that makes this one worth paying
any more attention to than the ones that have been showing up for decades.

------
eloff
Actually, I expect Moore's law to speed up. Yes, Moore's law will end at
some point (in fact, physics tells us it must end this century), but it
won't end with the death of the silicon transistor, and it won't play out
the way people seem to think.

Without the ability to keep shrinking silicon transistors, the industry
will innovate in new directions. Eventually (perhaps after a period of
relative stagnation in Moore's law) it will hit on something that is
economical and that allows further scaling (graphene? who knows?), and
that's when things get really interesting. We've been using the same
technology for a long time now, with predictable, incremental improvements
from shrinking the process. When we switch to a wildly new technology with
wildly new characteristics and limits, we're likely to see some
order-of-magnitude improvements. It's this exciting transition period that
will result, in my estimation, in not just the fastest growth in the speed
of new processors and memory, but a general uptick in the rate of
innovation across the industry. We've gotten too comfortable with silicon
for too long. Change is in the air.

~~~
devx
Yes, but even graphene won't allow us to shrink transistors any further,
and this is what all these "end of Moore's Law" articles are really about.

> _" Moore's law is the observation that, over the history of computing
> hardware, the number of transistors on integrated circuits doubles
> approximately every two years."_

So that won't be possible anymore, even if we switch to graphene. But if
we can actually make graphene chips that allow us to increase the clock
speed every year, from 5 GHz to 7 GHz, then 10 GHz, and so on up to 200
GHz or whatever, then that could be a "solution" for continuing to double
performance every couple of years or so.
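
Taking those figures at face value, a quick back-of-the-envelope sketch
(Python as a scratchpad; the 5 GHz and 200 GHz endpoints are just the
numbers above, not graphene specs):

  import math

  start_ghz = 5.0
  end_ghz = 200.0
  doublings = math.log2(end_ghz / start_ghz)  # ~5.3 doublings of clock speed
  years = doublings * 2                       # at one doubling every ~2 years
  print(round(doublings, 1), "doublings, roughly", round(years), "years")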

But if graphene actually has that sort of potential and is not just all hype
right now, then maybe that can hold us until 2030-2040, and hopefully by then
we'll know how to make quantum computers that can surpass those computers in
"general" performance, and can also be used to easily build "apps" on top of
them.

------
MAGZine
Also coming: the heat death of the universe.

I'm not saying that it's not far off, but the only thing I care to hear
about Moore's law is when it is finally obsolete. There are more
interesting things to write articles about.

------
Filligree
So what this is saying is that there are limits to how far you can push
photolithography. Maybe. Yes, it's probably true, but as for how relevant
that is...

I'd like to point out that there's an existence proof for putting the power of
~some very large supercomputer into the space of a human head, running off a
hundred watts or so.

I am, of course, referring to the brain itself. It's a very different sort of
architecture, that's true, but does anyone really think we'll stop before we
get close to that?

~~~
ajuc
Are we sure the brain has more raw power than a PC? Because when I try to
outperform my PC at multiplying floats, I get owned every time...

Maybe the brain is faster at the things it's faster at only because it's
hardwired to do those things, and we're incorrectly comparing it to a
general-purpose CPU? And the brain has this very slow Turing machine
emulator on top of millions of special-purpose circuits.

~~~
Filligree
We're quite sure it has more raw power, yes.

And a lot of its power comes from special-purpose circuits, yep; you're
only wrong about the order of magnitude (billions, not millions).

It also has a couple of other 'advantages'. For instance, running at 100
Hz means far lower power consumption. Of course, it also means many
problems can only be solved through precaching; notice how much better
computers are than people at, say, bouncing a ball, while using
ludicrously less CPU time.

For the raw power calculation, though...

We don't have a perfect model of neurons yet, but it appears likely that the
average neuron can be modelled by a circuit of 500 or so transistors;
contrariwise, a single neuron can easily model a single transistor. Neurons
are more complex, no question about it; I believe their behaviour was once
compared to vacuum tubes, although that's incorrect. The synapse count is more
typically used for comparisons; problem is, each neuron has multiple synapses,
and they're not quite independent.

To make the problem more complex, synapses double as memory devices; details
of the chemistry of a single synapse can change fairly quickly, apparently
allowing for long-term memory and skill formation without changing the gross
structure of the brain (that is, graph connectivity). Though the connectivity
also changes. The total storage capacity of the brain has been estimated at
anything from a few gigabytes (very, _very_ good compression algorithms; you
might even call them AI-complete) to several terabytes, but it's nothing to
write home about.

Now, there are 6 billion transistors in the most recent CPUs, and 85
billion neurons in the human brain. Naively, this would make the human
brain anywhere from 14 to 7000 times more powerful than a modern i7, but
of course there are confounding factors.
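
Just to spell out that naive arithmetic (Python as a scratchpad; every
figure here is one of the rough estimates quoted above, nothing more):

  neurons = 85e9                 # rough human neuron count, from above
  cpu_transistors = 6e9          # recent high-end CPU, from above
  transistors_per_neuron = 500   # rough modelling estimate, from above

  low = neurons / cpu_transistors                            # ~14x
  high = neurons * transistors_per_neuron / cpu_transistors  # ~7000x
  print(round(low), round(high))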

For starters, the aforementioned frequency differences. The i7 can run
serial algorithms; the brain essentially cannot. Any given thought you
have may be produced over a fairly long period, but it could be no more
than a second; in the latter case, that means only 100 cycles. So it
pretty much has to be put together from previously-made components...
meanwhile, the i7 can run several billion cycles of serial logic in that
period. This doesn't just mean doing single things faster; per Amdahl's
law there are some algorithms that cannot be usefully parallelized, so the
i7 can run them while you cannot, and it may therefore be able to do the
same thing _using fewer resources overall_.
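
For reference, Amdahl's law just says the serial fraction of a job caps
the speedup you get from adding workers, no matter how many you add; a
minimal sketch (the 0.95 parallel fraction is a made-up illustrative
number):

  # speedup = 1 / ((1 - p) + p / n) for parallel fraction p and n workers
  def amdahl_speedup(p, n):
      return 1.0 / ((1.0 - p) + p / n)

  print(amdahl_speedup(0.95, 4))     # ~3.5x on four cores
  print(amdahl_speedup(0.95, 85e9))  # ~20x even with 85 billion "workers"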

Okay, so that makes the brain seem less impressive. It's especially the case
for motor functions - a great deal of the brain is dedicated to that, largely
because there's no time to do longwinded calculations, so it all has to be
precached. Which is why you need a lot of practice to reliably throw a
basketball. It's an immense waste of space - a very cheap computer can do
better, where we've invented the algorithms. Our robots are getting pretty
good at moving, in some constrained circumstances.

On the flip side there are problems that appear to be inherently parallel,
such as face-recognition; all our best approaches so far essentially amount to
checking every part of the image against every conceivable possibility. Here
the brain has a major advantage, because neurons are damned well suited for
the purpose; one might call them well-tuned to running neural networks. Where
we're using general-purpose hardware to check several possibilities one after
another the brain can have cheap neurons hanging out on the visual cortex,
listening for particular signals, and never doing anything else.

Conclusion? Heh, can't say I have one, except... the brain uses
special-purpose circuitry everywhere, but half the time it's from a lack
of choice. It can't really do anything _else_. Regardless, technologically
it's still a generation ahead of us, packing more power into a smaller
space and energy budget than we can hope to... but not two, or three, and
our own speedy general-purpose systems have their own advantages that will
eventually let them outperform it.

EDIT: I should clarify: by 'generation' I mean a human generation - thirty
years, not three. At the current rate of progress, which is likely
irrelevant; we've already got good enough computers for AI (finally), we
just need to use a lot of them.

------
mathattack
Am I the only one who thinks we don't need Moore's Law anymore? The game has
moved from computing to communication.

~~~
devx
You need to think bigger and deeper than "using computers for Facebook".

1 billion times faster computers could help us manipulate weather,
terraform planets, solve death, have a much better understanding of
physics at a much more fundamental level, create fusion or
matter-antimatter engines, etc.

~~~
mathattack
Right. But can't most anything that is computable be done with existing
technology operating in parallel? (I'm not sure about solving death, but in
general high performance computing)

I'm interested in what's inherent about the doubling per integrated circuit
that makes it the limit, rather than just using a lot of circuits. Is it power
related? Does it have to do with the Von Neumann bottleneck?

For better or worse, I think 3-D videoconferencing is the short term use that
will drive both bandwidth and computing needs.

------
ChikkaChiChi
Adolescent technology can be optimized according to Moore's Law, but Intel
and other companies started moving the goalposts after the Pentium 4.

------
devx
Why is it so hard to understand that "Moore's Law" is coming to an end - for
silicon chips at least?

The point of these stories is that we're getting to transistors that are
approaching the size of atoms, and I think that happens roughly around
2-3nm for silicon.
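
Roughly how close to atoms? A quick sanity check (Python as a scratchpad;
the ~0.235nm figure is the approximate Si-Si bond length, and the 2nm
feature size is the estimate above):

  si_si_bond_nm = 0.235   # approximate silicon-silicon bond length
  feature_nm = 2.0        # feature size from the estimate above
  print(feature_nm / si_si_bond_nm)  # a 2nm feature is only ~8-9 atoms wide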

We may still be able to improve performance for "computers" by switching
to graphene transistors: even if we can't shrink those any further, we
could maybe "optimize" them and harvest all the performance a graphene
transistor has to offer. But that is probably only going to keep us going
for another 10 years or so.

There aren't that many ways out - until we start making quantum computers, and
hopefully silicon and graphene transistors will hold us until then.

~~~
mikeash
It has to end at some point, certainly. But there's no particular reason
to think it's now. Current chip technology has been "near the end" since
the 90s. So far, something has always come along to push the end back.

~~~
sharpneli
In a sense it has already happened. The clock speeds of production chips
have not increased markedly in the last 10 or so years.

Take a program that has a lot of unpredictable branching, random memory
accesses and heavily interdependent calculations (the next instruction
always depends on the previous one), and you are not really much faster
than on old CPUs.

The speedups in single-core performance in recent years have mostly come
from increasing bandwidth, automatically exploiting some concurrency
(out-of-order execution), more and more SIMD instructions, and so on.

Interestingly enough, if you want to write very high-performance CPU code,
it's almost like writing OpenCL for GPU execution. I'd even go so far as
to claim that right now the best way to write high-performance CPU code is
to use Intel's OpenCL implementation for their CPU.
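
To make the "interdependent calculation" point concrete, here's a
throwaway sketch (Python/NumPy purely for illustration, not OpenCL; what
matters is the shape of the computation, not the language):

  import numpy as np

  # Latency-bound: each step needs the previous result, so wider SIMD
  # units and more cores don't help.
  def dependent_chain(x, n):
      acc = x
      for _ in range(n):
          acc = acc * 1.0000001 + 3.0   # depends on the previous value
      return acc

  # Throughput-bound: the same arithmetic applied independently to every
  # element maps straight onto SIMD lanes (or OpenCL work-items on a CPU).
  def data_parallel(xs):
      return xs * 1.0000001 + 3.0

  xs = np.random.rand(1_000_000)
  data_parallel(xs)                # scales with vector width and core count
  dependent_chain(1.0, 1_000_000)  # stuck at roughly one step per cycle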

~~~
mikeash
Moore's Law was never about clock speeds. It was always about feature
size, i.e. the number of transistors per square inch.

For a long time, shrinking transistors directly enabled higher clock
speeds, and so people conflated the two. But the original statement of the
Law makes no mention of clock speeds.

Why does it matter _why_ single-core performance has continued to improve? It
used to be clock speed, now it's something else, but it's still getting better
all the time.

