
Moore’s Law Hits 50, but It May Not See 60 - simas
http://recode.net/2015/04/15/moores-law-hits-50-but-it-may-not-see-60/
======
pierre
Also in 2005: Moore's Law Hits 40, but It May Not See 50!

[http://www.slate.com/articles/technology/technology/2005/12/...](http://www.slate.com/articles/technology/technology/2005/12/the_end_of_moores_law.html)

~~~
alephnil
And to some extent it did end there. Before that time, many assumed that
Moore's law also meant sequential processing power would double every two
years, and that ended around then. Since then, most of the increase in
processing power has come in the form of more cores, which means you have to
parallelize your programs to get the performance.

Of course, Moore's law is about the number of transistors on a chip, and for
that it has continued to hold. Physics does mandate that it will end at some
point, when sizes reach the sub-atomic, so it won't see 100.
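
A rough back-of-the-envelope for when that happens (my own assumptions, not
from the article: doubling transistor count every two years shrinks linear
feature size by sqrt(2) per period, and a silicon atom is ~0.2nm across):

    import math

    # Assumptions (mine, for illustration): transistor count doubles every
    # 2 years, so linear feature size shrinks by sqrt(2) per period;
    # a silicon atom is roughly 0.2nm across.
    feature_nm = 14.0   # current process node, 2015
    atom_nm = 0.2
    periods = math.log(feature_nm / atom_nm, math.sqrt(2))  # ~12.3 periods
    print(periods * 2)  # ~24.5 years until atomic scale, well short of 100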

~~~
phaemon
The fastest CPU in 2005 was (I think) the mighty AMD Athlon 64 FX-57, which
PassMark rates at: 731.

Meanwhile, in 2015, we have the Intel Core i7-5930K which scores: 13,638.

~~~
Someone
Ten years is over six of Moore's law's 18-month periods, so that 'should' give
a factor of over 64 (ignoring the difference between transistor density and
speed).

Your numbers show a speed-up of less than 20.
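
A quick sketch of that arithmetic (assuming one doubling per 18 months, and
the PassMark scores quoted above):

    # Expected factor over 10 years at one doubling per 18 months,
    # vs. the observed PassMark ratio quoted above.
    periods = 10 * 12 / 18    # ~6.67 periods, hence "over 64"
    expected = 2 ** periods   # ~101x
    observed = 13638 / 731    # ~18.7x
    print(expected, observed)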

~~~
phaemon
> so that 'should' give a factor of over 64

No, that's not how trends work. You can't establish a trend from just two data
points. And, given a trend, you can't just pick two points and say they should
be so far apart. The only way that would work is if all the data points were
very close to the trendline. And the only way that would happen is if CPU
development happened very smoothly; which it doesn't. You have spurts of
performance increase as new architectures are released.

The only thing I was showing with those two figures was that it's clearly not
true that speed stopped increasing in 2005. If you want to show there's a
different trend, you need multiple data points, and to compare the recent
trend with the longer-term trend to see whether they're significantly
different.

~~~
sliken
Right, lies, damn lies, and statistics.

You could pick data points to make either case. However, if you plot
transistor density, performance, clock speed, flops/watt, or any other metric
people attribute to Moore's law, you'll find the curve isn't holding.
Improvements aren't stopping, but they aren't doubling every 18 months either.

Take, for instance, three generations of Intel chips (Sandy Bridge, Ivy Bridge
and Haswell), Nvidia GPUs, or ARM chips. Sure, a new feature might do
encryption, floating point, or some unusual graphics operation twice as well,
but that's a small fraction of the normal use case for that chip.

It's gotten so bad that Nvidia often just renames the chip without actually
revising the silicon between generations. ARM's new 64-bit generation of CPUs
(Cortex-A53 and A57) isn't improving much at all over last year's ARM cores.
Intel's new GPUs (one of the most parallelism-friendly workloads) aren't much
better than two generations ago; compare the best of the HD 6000 Intel GPUs to
the old HD 4000 from two generations back.

So exactly what metric supports the idea that Moore's law isn't dead? Clock
speed? Performance? Transistor budget? Number of cores?

What today (April 2015) is 8 times better than 4.5 years ago?
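
(For reference, the 8x is just three 18-month doublings:)

    # 4.5 years at one doubling per 18 months:
    print(2 ** (4.5 * 12 / 18))  # 2^3 = 8.0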

------
yaantc
In a sense, Moore's law is already over. It's not only about transistor
density; there's an economic side too: the original paper talks about the
density of the _least cost_ process. So the law ends either when we can't
increase the density, or when we can't do it without raising the cost.

Now look at what Samsung says (from
[http://www.eetimes.com/document.asp?doc_id=1326369](http://www.eetimes.com/document.asp?doc_id=1326369)):
"The cost per transistor has increased in 14nm FinFETs and will continue to do
so, Low said". Now I know Intel has a different take on this, but a few months
ago a JP Morgan report said that Intel's cost was TSMC's selling price... They
have more margin to reduce costs.

And that's only the marginal cost of transistors. The NRE costs to design a
new chip (design & tests, masks) are increasing with each new process node.
For massive-volume players (Intel, Apple, Qualcomm...) that's acceptable. For
low- to mid-volume players these NRE costs must be factored in, and they
penalize new nodes. A lot of smaller players stop way before the latest
14/16FF: 40nm is common, and a lot of people will stop at 28nm (the last node
with no double patterning).

For an interesting analysis of the cost factors in semiconductors, here's a
nice blog: [https://www.semiwiki.com/forum/content/4522-moore%92s-law-
de...](https://www.semiwiki.com/forum/content/4522-moore%92s-law-dead-long-
live-moore%92s-law-%96-part-1-a.html)

BTW, the "least cost" part is very important. The cost of developing and
putting into production each new process node is increasing. But as long as
the cost per transistor (with NRE amortized ;) goes down, the accessible
market grows and can bear the increasing cost. When costs start to increase,
you need a reason to pay more for the newer node. That reason is performance
(whether speed, power efficiency, or a mix). And not everybody will pay for
better; some are fine with the "old" nodes. So the market base is not
increasing, it's shrinking. At some point it won't bear the massive R&D
needed. This is not a nice thought for all the experts involved in this
expensive game.
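
A minimal sketch of that "least cost" logic, with invented numbers purely for
illustration (nothing below comes from the article or the reports cited
above):

    # Toy model: effective cost per chip = marginal cost + NRE amortized
    # over volume. All figures are made up for illustration.
    def cost_per_chip(marginal, nre, volume):
        return marginal + nre / volume

    old_node = cost_per_chip(marginal=5.0, nre=5e6, volume=1e6)      # $10.0
    new_node = cost_per_chip(marginal=4.0, nre=5e7, volume=1e6)      # $54.0
    new_high_vol = cost_per_chip(marginal=4.0, nre=5e7, volume=5e7)  # $5.0
    print(old_node, new_node, new_high_vol)
    # Only very high volume makes the newer node cheaper; if the market
    # base shrinks, the NRE can no longer be amortized away.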

~~~
minthd
Sure, there are many problems with Moore's law, and it's definitely slowing
(not sure yet about stopping, because maybe EUV lithography will work, or
maybe monolithic 3D will - Qualcomm is said to be selling such chips in 2016 -
or maybe something else). There are also interesting innovations at the higher
levels - how to build much smaller SRAM cells, the use of memristors, etc. -
which could still support a declining cost for electronics.

But I don't agree that the market base is declining. Two major factors in the
market base ($$) are the value of the uses of computing, and how much of that
value chip manufacturers extract. And there are some real killer apps with
plenty of value in the future (virtual reality, AI in all its forms, new large
data sets to work with, emerging markets, etc.). Also, chip makers in many
markets get only a small share of the created value, and might get more.

As for the problem of small/large players and design costs - the solution is
platforms, whether GPU/FPGA/CPU, a more exotic programmable platform like
specialized computer-vision chips, or chips where some of the layers are
already created (and standard) and you just create the rest (like eASIC is
selling), which can lead to great design-cost reduction and relatively good
efficiency.

So in general I tend to agree with someone from DARPA who said that we're near
the end of Moore's law, but we might still be able to get a total of 50x
improvement from a combination of things.

~~~
marcosdumay
> Two major factors in the market base($$) are - the value of uses of
> computing, and how much of that value chip manufacturers extract.

You are defining the market as the entire market for chips, while the GP is
talking about the market for next-generation chips only.

Your definition is useless to a company deciding whether to invest in R&D to
improve its chips. The GP's definition is the one Moore's law depends upon.

~~~
minthd
For most of the markets I've described, lower transistor costs and lower power
are critical factors, and assuming the next generation provides both, chip
makers must improve their chips to compete.

------
ghshephard
It's somewhat comedic how the author references his own article (from which
80% of this article is cribbed):

[http://www.forbes.com/2005/04/15/cx_ah_0415tentech.html](http://www.forbes.com/2005/04/15/cx_ah_0415tentech.html)

------
Sir_Substance
>An object of 14 nanometers in size is smaller than a typical virus cell, and
about the equivalent to the thickness of the outer cell wall of a typical
germ.

You can tell this guy's not a medical journalist.

~~~
concerned_user
I've always thought that viruses don't have cells and are simply DNA molecules
floating around.

~~~
learnstats2
I understand that he is using "cell" to refer specifically to a single virus
particle - a virion.

------
praeivis
I predict the headline for 2025: Moore's Law Hits 60, but It May Not See 70.

------
sliken
In what sense isn't it dead already? Transistors per chip, clock rate, and
performance haven't doubled for several generations now. Take Nvidia GPUs,
Intel CPUs, or ARM CPUs, for instance.

~~~
Gurkenmaster
You're right. The transistor count only increased by 90% every 18 months.

~~~
marcosdumay
What's important is that they are increasing by a smaller ratio every
generation; generations are taking longer and longer; and the price per
transistor is increasing steadily.

In other words, Moore's Law is already dead.
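
To make the gap concrete (one doubling per 18 months vs. the sarcastic "only
90%" above, compounded over a decade):

    # Compounding 100% vs. 90% growth per 18-month period over 10 years.
    periods = 10 * 12 / 18   # ~6.67 periods
    print(2.0 ** periods)    # ~101x (Moore's law pace)
    print(1.9 ** periods)    # ~72x  (the "only 90%" pace)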

------
fizixer
What really matters is if we've made enough hardware progress to allow us to
develop strong AI.

And IMO the answer is a resounding 'yes'.

~~~
marcosdumay
In the '90s, MIT published a lower-bound estimate of the processing capacity
of a person's brain. We are only now entering the era when all the computers
in the world, added together, have reached this capacity.

I simply have no idea where you got that 'resounding yes' from.

~~~
fizixer
You didn't provide the source of your MIT study, but that's okay because:

\- I didn't provide any source either.

\- But more importantly, I'm well aware that there are studies out there that
grossly overestimate the information-processing capacity of a human brain.

Please see this link:
[http://en.wikipedia.org/wiki/Computer_performance_by_orders_...](http://en.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude)

In the petascale section, the top supercomputer as of today, Tianhe-2 (~33
petaflops), is listed right next to the computing power of the human brain
(~36 petaflops).

Also this link is relevant:
[http://cs.stackexchange.com/questions/20016](http://cs.stackexchange.com/questions/20016)

But it lists many mutually disagreeing estimates. About half of the estimates
indicate we have already achieved the capability (in the form of the top
supercomputer, not necessarily in the form of a sub-$1000 computer).

I should point out the accepted answer in particular. I have spent some time
on the RIKEN simulation story (1 second of activity of 1% of the human brain
took 40 minutes on a 10-petaflop RIKEN supercomputer) in the past, and I'm
beginning to believe it's nothing more than a marketing gimmick:

\- They never released a comprehensive scientific paper, detailing the methods
they used.

\- The 1% refers to ~1.73 billion neurons, which is really 2% of the whole CNS
(~86 billion neurons). But calling the whole CNS the human brain is nonsense;
the brain proper only has about ~19-23 billion neurons. So in reality it was
~10% of the human brain.

\- 'Simulation' could have vastly different definitions, with vastly different
computational complexities. If they were solving differential equations of the
brain's biochemistry, then I wouldn't be surprised it took 40 minutes to
simulate 1 second of activity of 1.73 billion neurons. But that is
computationally expensive, and a wasteful way of doing brain simulation,
because we're mostly concerned with the information-processing nature of the
neurons, not their biochemistry.

Furthermore:

The top supercomputer as of today, Tianhe-2, is almost two years old. Do you
know how much it costs today to buy GPUs worth 33 petaflops? AMD's R9 290X is
a 6-teraflop GPU costing $250. You do the math. (Hint: it's less than $1.5
million! Tianhe-2 was a $390 million project; Titan was $97 million at ~18
petaflops.)
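
(The math, with the figures quoted above:)

    # Matching Tianhe-2's ~33 petaflops with R9 290X cards
    # (~6 teraflops at ~$250 each, as quoted above).
    cards = 33 * 1000 / 6   # 5500 GPUs
    print(cards * 250)      # $1,375,000 -- under $1.5 million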

Finally:

If you plan to run a dedicated piece of software on your hardware, an FPGA
blows a GPU out of the water, and an ASIC blows an FPGA out of the water. If
someone today implemented a deep-learning system (just an example, not
claiming it's the end-all-be-all of strong AI) on an ASIC, I wouldn't be
surprised if one such card performed the equivalent of between 100 and 1000
teraflops of GPU performance.

And I'm talking about 28nm process. 22nm would only sweeten the deal.

So yes, Moore's law has been a great help in its leg of this ~50-year "relay
race". Now it's time to pass the baton to the next leg: optimize the
architecture and put some strong AI software on it.

------
yogthos
In related news, vacuum tubes can only be made so small!

------
JacobAldridge
I think we need a new Law: "The number of articles predicting the imminent end
of Moore's Law doubles every 18 months."

~~~
avn2109
Aldridge's Law

~~~
herbig
t3knomanser's law:

[http://www.fark.com/comments/7215487/Moores-Law-no-
more](http://www.fark.com/comments/7215487/Moores-Law-no-more)

------
pinkunicorn
Maybe Moore was short-sighted.

