

Bill Dally: Life After Moore's Law - _delirium
http://www.forbes.com/2010/04/29/moores-law-computing-processing-opinions-contributors-bill-dally.html

======
hga
Feh. Yet another example of why I stopped reading _Forbes_: if their
articles in an area I know well are abysmal, then how can I trust anything they
say in areas I know little or nothing about?

In this case, the author confounds the real Moore's Law with the resulting
"speed". The law itself only states that, over a certain time period (1 year
in the original 1965 paper, revised to 2 years later), the number of
transistors you can put on a die at the lowest cost per transistor will
roughly double.
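
As a quick back-of-the-envelope illustration (my numbers, not the article's),
here's what that doubling rule compounds to, sketched in Python with an
assumed 2-year period and the 4004's 1971 transistor count as a starting point:

    # Rough sketch of Moore's Law doubling; the period and starting point
    # are illustrative assumptions, not figures from the article.
    def transistors(start_count, start_year, year, period_years=2.0):
        doublings = (year - start_year) / period_years
        return start_count * 2 ** doublings

    # Intel 4004 (1971) had ~2,300 transistors:
    print(transistors(2300, 1971, 2010))  # ~1.7 billion, in the ballpark
                                          # of 2010-era high-end chips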

Amazingly, Moore's Law is still holding up, but not the implicit promise of
increased speed:

There was a corollary of generally increased clock speeds as everything got
smaller, but as noted in the article we hit a brick wall there due to power
dissipation (e.g. see the Netburst (P4) microarchitecture disaster).
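
The reason clock scaling hit that wall is the standard dynamic-power relation,
P ~ C * V^2 * f: pushing frequency up historically also meant pushing voltage
up, so power grows much faster than the clock does. A toy sketch (all values
invented purely for illustration):

    # Dynamic CPU power scales roughly as C * V^2 * f.
    # The numbers below are made up just to show the shape of the curve.
    def dynamic_power(c_farads, v_volts, f_hz):
        return c_farads * v_volts ** 2 * f_hz

    base = dynamic_power(1e-9, 1.2, 2e9)
    # 50% higher clock, assuming voltage must rise ~15% to sustain it:
    hot = dynamic_power(1e-9, 1.2 * 1.15, 3e9)
    print(hot / base)  # ~2x the power for only 1.5x the clock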

But from my informal understanding of CPU speed, the greatest gains have come
from better microarchitectures. There are fairly clean examples of this in,
e.g., the evolution of the 680x0 and the 386-and-beyond lines of
microprocessors, and we can see it today in x86-64 and ARM processors that are
superscalar (or not) and in-order or out-of-order (Atom vs. most everything
else, Cortex-A8 vs. A9, etc.).

Our problem here is of course that the microarchitects have run out of the big
wins, e.g. "Per Core, clock-for-clock, Nehalem provides a 15–20% increase in
performance compared to Penryn"
([http://en.wikipedia.org/wiki/Nehalem_%28microarchitecture%29...](http://en.wikipedia.org/wiki/Nehalem_%28microarchitecture%29#Performance_and_power_improvements)).
Nice, but not a _big_ win, not like the 386 -> 486 -> P5 -> P6 steps, where,
among many other things, on-chip cache was added, the line went superscalar,
and then it went out-of-order superscalar.
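
For perspective (my arithmetic, not from the quote above): at ~17% per
generation, the midpoint of that 15-20% figure, it takes several
microarchitecture generations to buy what one of those big steps used to:

    # How many ~17%-per-generation steps does it take to double
    # clock-for-clock performance? 17% is my assumed midpoint of 15-20%.
    import math
    print(math.log(2) / math.log(1.17))  # ~4.4 generations per doubling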

Maybe the author gets this and it just didn't emerge in the final draft, but I
have no way of knowing that....

~~~
_delirium
Well, the author's the former chair of Stanford's CS department and the author
of a standard computer architecture textbook, so whatever the explanation for
the article's problems, a lack of knowledge about computer architecture seems
unlikely to be it...

On the other hand, he's currently a VP at NVIDIA, so he isn't entirely
unbiased, either, when it comes to spinning the current trajectory of
CPUs/GPUs.

