
Moore’s law really is dead this time - kens
http://arstechnica.com/information-technology/2016/02/moores-law-really-is-dead-this-time/
======
cm2187
The problem with Moore's law is that it is about transistors per chip, but most
people interpret it as computing power.

Intel may have shrunk the chip, but it isn't really getting any faster. One
might argue that a smaller process means more cores packed onto the same die,
but that's not true either. The vast majority of computers still only have 2
or 4 cores (in fact it has become impossible to find a 4-core ultrabook,
something that existed a few years ago - the Vaio Z series).

So chips are not getting any faster. We're not getting more of them either.
They might consume a bit less energy. But as far as I can tell the increase in
computing power is dead in the water and has been for 5 years.

Today the only reason to replace a computer is if someone spilled some coffee
on it.

~~~
eva1984
Yeah... ELI5: if there's no big performance gain in benchmarks or applications,
what are all those additional transistors really doing? Because they have in
fact still been growing exponentially up to now.

~~~
alcari
They're acting as cache, to hide the latency to your RAM. They're implementing
increasingly niche instructions that speed up ever smaller sets of programs.
They're mostly sitting idle, because power usage hasn't fallen at nearly the
rate of increasing density, so switching them all at the same time is a great
way to start a fire.
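
As a rough illustration of the "increasingly niche instructions" point, here's a
small Python sketch (my own, Linux-only) that just checks which of a handful of
extensions the CPU advertises; the short list of flags to look for is an
arbitrary pick, not anything authoritative:

    # Check /proc/cpuinfo for a few specialised instruction-set extensions.
    niche = {
        "aes":    "AES-NI, hardware crypto",
        "sse4_2": "SSE4.2, e.g. hardware CRC32",
        "avx2":   "256-bit vector operations",
        "rdrand": "hardware random numbers",
    }

    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break

    for flag, desc in niche.items():
        status = "yes" if flag in flags else "no"
        print(f"{flag:8} {status:4} {desc}")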

A significant part of the exponential gains in the '90s and early 2000s was
that clock speed kept increasing. That's been stagnant for years because of
heat problems.

~~~
lmm
It's worth pointing out that what's really happening here is that no one cares
about the desktop. Heat is important for servers and mobile, and modern CPUs
are designed for those, with desktops as an afterthought. On a desktop, the
abandoned Pentium 4 architecture can clock up to 12GHz and will beat modern
CPUs for at least some workloads.

~~~
john_reel
Really? Are there any benchmarks available?

------
userbinator
Hopefully this means we'll finally see software optimisation become more
important; Moore's law encouraged developers to be wasteful and inefficient,
and it'll be good to see that come to an end.

~~~
moonshinefe
Hardware performance has so thoroughly outpaced most common software demands
in the last decade that, in my opinion, it isn't an issue these days in most
cases. I use an essentially 5-year-old computer and it can still play most
modern games on medium settings.

This wasn't nearly the case in the 90s-early 2000s (as far back as I go),
where one's computer would be obsolete and fail to run many programs even 2-3
years after purchase.

~~~
spdionis
Did you try to play Fallout 4? :(

~~~
Jordrok
Fallout 4 was the game that finally got me to replace my ~9 year old
Frankenputer. Almost every part inside of that case had been replaced at least
once and it was still trucking along just fine for most games until F4. Even
on the lowest possible settings it would still slow to a crawl if too many
enemies showed up at once.

On the bright side, at this rate my new rig should last me even longer!

------
irremediable
I went to an interesting talk lately from an Nvidia research guy who believes
GPUs are the "next" Moore's law. He's upbeat about this; he thinks it'll give
a lot of jobs to lower-level programmers for the next decade or so.

~~~
mrec
Did he talk much about performance per watt? The trend with recent NV GPUs
seems to be to increase performance but at the cost of ever-increasing power
budgets; I'm starting to despair of ever getting anything that'll meet the
Oculus reqs without sounding like a 747 taking off and/or melting a hole in
the table.

~~~
yvdriess
GPUs are in an interesting place. They were not able to scale below 28nm due
to the increased static leakage. Intel CPUs were able to keep scaling because
of dynamic power management, but GPUs cannot really use the same trick. It's
only with the upcoming 16nm generation that static leakage has been brought
down to a point where we'll see an actual new generation of GPU hardware. It's
been about 5 years now of 28nm + GDDR5.
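
To put made-up numbers on that (a back-of-the-envelope sketch; the usual
first-order model is dynamic power ~ a*C*V^2*f plus static leakage ~ V*I_leak,
and every value below is invented purely for illustration):

    # Dynamic power can be managed by lowering f (or gating clocks);
    # static leakage is paid whether the transistors switch or not.
    def dynamic_power(a, C, V, f):
        return a * C * V**2 * f

    def static_power(V, I_leak):
        return V * I_leak

    # Hypothetical chip: 1 nF switched capacitance, 1.0 V, 1 GHz clock.
    print(dynamic_power(a=0.2, C=1e-9, V=1.0, f=1e9))    # ~0.2 W
    print(static_power(V=1.0, I_leak=0.05))               # 0.05 W, always there

    # Halving the clock halves the dynamic part...
    print(dynamic_power(a=0.2, C=1e-9, V=1.0, f=0.5e9))  # ~0.1 W
    # ...but the leakage term doesn't move, which is why it starts to
    # dominate as nodes shrink.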

~~~
thesz
The lengths you mentioned are lambdas - scale factors for on-die features. This
means that some parts of a transistor cannot be smaller than a lambda-sized
square. Usually they are much bigger, and often they are of equal size even
for vastly different lambdas.

Here's a table for SPARC CPUs, with process lambda, die size and transistor
count:
[https://en.wikipedia.org/wiki/SPARC#SPARC_microprocessor_imp...](https://en.wikipedia.org/wiki/SPARC#SPARC_microprocessor_implementations)

If transistors shrank in proportion to lambda, then a two-fold reduction would
give a four-fold increase in density. The difference in density between the T2
and T4 (90 and 40 nm processes) is less than two-fold.
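
Put another way, here's the arithmetic from above as a quick Python sketch:

    # If density scaled as 1/lambda^2, going from a 90 nm to a 40 nm
    # process should give roughly a 5x density increase:
    expected = (90 / 40) ** 2
    print(expected)   # ~5.06

    # Per the Wikipedia table above, the actual density gain between
    # those SPARC generations was less than 2x, i.e. well short of
    # pure lambda scaling.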

The static leakage you mentioned is mainly due to the sizes of the transistor
parts.

You should also know that NVidia uses humans as chip designers (they can
afford that), not an automatic process like _some_ fabless companies use.

And now I get to my main point: NVidia overshot with 28nm and got transistor
features that were too small, probably due to the not-so-good software they
used to calculate their models. They kept overshooting for 5 years until they
corrected it. It has nothing to do with the process lambda; it is a process
error at NVidia.

------
leereeves
> The highly integrated chips used in these devices mean that it's desirable
> to build processors that aren't just logic and cache, but which also include
> RAM, power regulation, analog components for GPS, cellular, and Wi-Fi
> radios, or even microelectromechanical components such as gyroscopes and
> accelerometers.

This sounds like very bad news for companies that currently make those
components.

~~~
moonshinefe
It's also bad news for consumers who value choice and competition when
choosing components. I don't want to be locked into a single vendor for so
many things at once; it just lets them extract more money from the user
because there are no alternatives.

~~~
yoz-y
The main issue with this is that such consumers are few and far between. The
benefits of a single chip packing everything in (such as lower cost, better
battery life and easier waterproofing) matter more to most people than freedom
of choice. There was a time when I knew who made the components in my
computer; those days are long gone.

------
djcapelis
I like the ITRS roadmap as much as the next person, but maybe we could wait a
month to see what it says before reporting on it?

I imagine some folks in the right places already know what it is going to say,
and this is Ars giving us a piece of that, but... there's not much here yet.
I'd rather just read the actual roadmap in a month.

------
ekianjo
I like it when a source claims it's dead without showing a proper graph of how
far we are from the Law's actual prediction. Ars, couldn't you have tried a
little harder?

~~~
ars
Ars is not the source; the source is the people who make the chips, who said
it won't increase anymore.

And when they say it ahead of time, instead of trying and potentially failing,
that's when you know it's really over: they've given up on even trying.

~~~
mattlutze
They could also be planning to spend money on targets other than transistor
count, and need to attempt to stop Moore's Law from biting their collective
stock prices.

------
codeshaman
Well, then I guess it's not a law, is it? If we had called it Moore's
"observation", maybe it wouldn't feel so tragic?

------
Kiro
Why does Moore's law have so much authority? It seems like it was just pulled
out of thin air. If it really were a scientific "law", it surely wouldn't
arbitrarily be 12 or 24 months? It seems philosophical if anything.

~~~
mfukar
It's a prediction. It just held true for a remarkably long time.

~~~
hga
It was also based on economic "laws" of a sort. One of the biggest reasons to
keep pushing to smaller process nodes is that incremental costs went down, not
just due to increased density, but because, all else being equal, a shrink of
an existing design resulted in much higher yields (for example, 5 bits of dust
on a wafer will kill a much smaller percentage of die with an equal number of
transistors, since the smaller process node fits more die on the wafer).
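
A toy sketch of that effect in Python (all the numbers below are invented just
to show the shape of the argument):

    # A fixed number of dust particles kills at most that many die, so
    # smaller die -> more die per wafer -> a smaller fraction lost.
    wafer_area = 70000.0   # mm^2, roughly a 300 mm wafer
    defects = 5            # "bits of dust", each assumed to kill one die

    for die_area in (400.0, 200.0):         # same design, before/after a shrink
        dies = int(wafer_area // die_area)  # ignoring edge losses
        yield_frac = (dies - defects) / dies
        print(f"{die_area:.0f} mm^2 die: {dies} per wafer, yield ~{yield_frac:.1%}")

    # ~97.1% yield at 400 mm^2 vs ~98.6% at 200 mm^2, and twice as many
    # (cheaper) candidate die per wafer.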

------
pierre
And yet the CPU inside my phone is at least twice as powerful as the one I
used 2 years ago, while consuming less power. Maybe mobile phone CPUs are
where the research is focused now, and where Moore's law lives on?

------
vlehto
I'm not much of a computer guy. I've heard the argument that "strong AI will
eventually happen because of Moore's law".

Now, is that bullshit, or can SSD and GPU improvements compensate?

~~~
adwn
> _Now is that bullshit_

That line of reasoning was bullshit then and is bullshit now. No strong AI
will magically jump out of your computer if you just make it fast enough,
because we still have no clue how to create true intelligence.

------
eecks
This is good for passwords right?

~~~
moonshinefe
Somewhat. But as far as I know, most password cracking rigs these days use
tons and tons of GPUs to brute-force hashed passwords. I'm not sure what the
limit is on how many GPUs you can use simultaneously, but they sort of
sidestep this hardware progress issue by just distributing the guesses.

(A bit of a dated article, but if interested:
[http://arstechnica.com/security/2012/12/25-gpu-cluster-crack...](http://arstechnica.com/security/2012/12/25-gpu-cluster-cracks-every-standard-windows-password-in-6-hours/))
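
The "distributing the guesses" part is the easy bit, because the keyspace
splits into completely independent ranges. A toy Python sketch (a 4-digit PIN
and SHA-256 stand in for a real rig; each slice would go to its own GPU):

    import hashlib

    TARGET = hashlib.sha256(b"7351").hexdigest()   # toy "stolen" hash

    # Chop the 0000-9999 keyspace into 4 independent slices.
    slices = [(i, i + 2500) for i in range(0, 10000, 2500)]

    def crack(lo, hi):
        for n in range(lo, hi):
            guess = f"{n:04d}".encode()
            if hashlib.sha256(guess).hexdigest() == TARGET:
                return guess.decode()
        return None

    for lo, hi in slices:          # a rig runs these slices in parallel
        hit = crack(lo, hi)
        if hit:
            print("found:", hit)   # -> found: 7351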

------
puppetmaster3
Disagree. GPU. Vector computing. Ex: SSL is faster.

~~~
lumpypua
Those are proof that Moore's law is dying. We're stuck eking out more work per
transistor with specialized approaches because we can't just throw more
transistors at the problem.

~~~
leereeves
Parallel computing became important recently because even though transistor
density was still increasing, heat issues forced the division of CPUs into
multiple cores. Parallel computing was required to use all the transistors.
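
Roughly, the shift looks like this (a minimal Python sketch; the work function
is just a stand-in for any CPU-bound task):

    from multiprocessing import Pool

    def work(n):
        # Stand-in for any CPU-bound task.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8

        # Old model: one fast core, run everything serially and wait for
        # the next clock-speed bump to make it faster.
        serial = [work(n) for n in jobs]

        # Post-2005 model: clocks stalled, the extra transistors became
        # extra cores, and the program has to be split up to use them.
        with Pool() as pool:
            parallel = pool.map(work, jobs)

        print(serial == parallel)   # same result, computed across all cores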

With the end of Moore's law, we won't get more performance with that approach
either.

