

Ask HN: Has Moore's Law slowed innovation in chip manufacturing? - hershel

There's this contrarian conversation on Reddit about the possibility that Moore's Law has greatly slowed innovation in the chip manufacturing industry [1].

What does HN think about this?

[1] http://www.reddit.com/r/programming/comments/229ggx/the_future_doesnt_have_to_be_incremental/cglalgg
======
zackmorris
As far as I can tell, Moore's Law ended about 10 years ago, because 3 GHz CPUs were available then and most CPUs are slower than that now. Sure, transistor counts have gone up, but the amount of computation that can be done in a single thread hasn't seen the 100x speedup it saw in previous decades. Mostly there's just been cheating, adding pipeline stages or RAM latency cycles to get the frequencies up.

Also, just before I graduated college in 1999, I remember my VLSI professors talking about how chips had passed the point where the interconnect had a square cross section and was moving to a rectangular one. So today chips look like skyscrapers, with tall ribbons of interconnect that suffer from crosstalk.

On top of that, there's been a failure to move to distributed and multicore processing outside of GPUs, or for FPGAs to get adopted by the mainstream. I've given up on innovation on the hardware front from the big players, but I'm optimistic that approaches like Google's Go running inside hypervisors on diverse hardware are going to bring the high throughput we see in gaming and DSP to the mainstream. I'm unimpressed by OpenCL and CUDA, and I won't really consider parallel computation as having arrived until more readable languages like Python and MATLAB generate accelerated code for us.
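To make concrete the kind of thing I'm hoping for, here's a minimal sketch using Numba (my choice of tool for the example, not something anyone in this thread is shipping): a plain, readable Python loop that gets JIT-compiled and spread across cores.

    # Sketch: accelerated code generated from a readable language.
    # Numba compiles this plain Python loop to native code and runs it in parallel.
    import numpy as np
    from numba import njit, prange

    @njit(parallel=True)              # JIT-compile; parallelize prange loops across cores
    def saxpy(a, x, y):
        out = np.empty_like(x)
        for i in prange(x.size):      # each chunk of iterations runs on its own core
            out[i] = a * x[i] + y[i]
        return out

    x = np.random.rand(10_000_000)
    y = np.random.rand(10_000_000)
    print(saxpy(2.0, x, y)[:3])

The point isn't this particular library; it's that the source stays as readable as the naive loop.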

I realize there are holes in my statements here that one could fly a 747 through, but when the single biggest speedup in my computing in a decade came from installing a 512 GB SSD, it means that something went terribly wrong. Moore's Law ending could have more to do with computers reaching a point of being "good enough", essentially becoming disposable appliances, than with limitations of technology or cleverness. So they may not be getting any faster, but they are becoming so cheap that I think we'll see life change in rather interesting ways in coming years.

~~~
Perdition
Moore's Law is only about transistor density. The clock speed race of the '90s mirrored Moore's Law but was only indirectly related (in that the smaller you make a circuit, the less time a signal needs to propagate through it).
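To put a rough number on that relation (the figures below are assumptions for illustration, not measurements): crossing a whole die takes a meaningful fraction of a multi-GHz clock period, so shrinking distances is part of what made those frequencies reachable at all.

    # Back-of-envelope: signal propagation vs. clock period (assumed numbers).
    c = 3e8                        # speed of light, m/s
    wire_speed = c / 2             # assume on-chip signals travel at roughly c/2
    path_m = 0.010                 # assume a ~10 mm worst-case path across the die

    crossing_ps = path_m / wire_speed * 1e12
    cycle_ps = 1 / 3e9 * 1e12      # one cycle at 3 GHz
    print(crossing_ps, cycle_ps)   # ~67 ps to cross vs. ~333 ps per cycle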

The improvement in GPUs is entirely in line with Moore's Law.

>On top of that, there's been a failure to move to distributed and multicore processing outside of GPUs

Few ordinary tasks besides graphics processing benefit significantly from multi-threading, and the resulting program complexity means developers don't bother with it for minor improvements.
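As a toy illustration of that complexity-versus-payoff trade-off (a sketch, not a benchmark): the parallel version of even a trivial job needs a process pool, serialization, and result collection, and for everyday-sized inputs the user wouldn't notice the difference.

    # Toy sketch: the parallel path adds machinery for a win most users never feel.
    from concurrent.futures import ProcessPoolExecutor

    def work(n):
        return sum(i * i for i in range(n))

    def run_serial(jobs):
        return [work(n) for n in jobs]

    def run_parallel(jobs):
        with ProcessPoolExecutor() as pool:   # extra processes, pickling, scheduling
            return list(pool.map(work, jobs))

    if __name__ == "__main__":
        jobs = [200_000] * 8
        assert run_serial(jobs) == run_parallel(jobs)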

There are other benefits from the power of modern processors, such as running programs in sandboxes to improve security, and being able to run multiple CPU-intensive programs on separate cores.

> but when the single biggest speedup in my computing in a decade came from
> installing a 512 GB SSD, it means that something went terribly wrong.

No it doesn't; it's just that SSDs cut disk access times by more, in absolute terms, than going from 1 GHz to 4 GHz improves program speed. A modern 4 GHz CPU is more than 4 times faster than a 1 GHz early-2000s CPU, but in absolute terms the speedup is small (i.e. 500 ms to 50 ms is more noticeable than 50 ms to 5 ms).
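Putting rough numbers on it (the split between disk and CPU time here is assumed, just to show the shape of the argument):

    # Assumed timings for one everyday task: mostly disk wait, a little compute.
    disk_ms_hdd, disk_ms_ssd = 500, 50    # SSD cuts the access-heavy phase 10x
    cpu_ms_slow, cpu_ms_fast = 50, 5      # faster CPU cuts the compute phase 10x

    baseline = disk_ms_hdd + cpu_ms_slow     # 550 ms
    with_ssd = disk_ms_ssd + cpu_ms_slow     # 100 ms -> 450 ms saved, very noticeable
    with_cpu = disk_ms_hdd + cpu_ms_fast     # 505 ms -> only 45 ms saved
    print(baseline, with_ssd, with_cpu)

Same 10x factor in both cases; the SSD just attacks the part of the wait that was big to begin with.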

Try running a CPU-bound task (the ordinary user doesn't have many of these) on an early-2000s CPU compared to a modern CPU.

~~~
zackmorris
Ya, I actually completely agree with everything you said. My main complaint, though, is that the difference between my 16 MHz LC II in the early '90s and my blue 333 MHz G3 iMac was almost two orders of magnitude, whereas my 2.3 GHz i5 Mac Mini only feels, say, 5 or 10 times faster than that iMac did. The question I ask myself is: if Moore's Law can no longer lead to faster computing, then what good is it?

For parallelism, I have lost hope that there will be much progress in compilers automagically parallelizing things like for loops in C++. There's some cool stuff with SSE etc., but it all feels fairly ineffectual to me. Probably people will just move to a newer language like Go or Haskell/Scala and keep C for glue code. I could maybe see something like a Groovy for low-level languages though, which makes global variables illegal, does some other things to eliminate side effects, and then compiles to, say, Go for the 10/100/1000x speedups we'd expect for things like graphics or matrix math.
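To show the shape of the win I mean for matrix math (sketched in Python/NumPy rather than any of the languages above, purely as an illustration): the same multiply written so the whole operation is visible to a library that can vectorize and use multiple cores, versus explicit loops.

    # Same matrix multiply two ways; the library call can use SIMD and threads.
    import numpy as np

    def matmul_loops(a, b):                    # explicit loops: one scalar op at a time
        n, k, m = a.shape[0], a.shape[1], b.shape[1]
        out = np.zeros((n, m))
        for i in range(n):
            for j in range(m):
                for t in range(k):
                    out[i, j] += a[i, t] * b[t, j]
        return out

    a, b = np.random.rand(100, 100), np.random.rand(100, 100)
    assert np.allclose(matmul_loops(a, b), a.dot(b))   # a.dot(b) is typically orders of magnitude faster

The speedup comes from expressing the computation at a level where the tooling is free to reorganize it, which is roughly what I'm hoping a future language does for ordinary code.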

~~~
Perdition
>The question I ask myself is: if Moore's Law can no longer lead to faster computing, then what good is it?

Smaller and hence lower-power chips. You wouldn't get multi-GHz phone CPUs if it weren't for Moore's Law allowing massive reductions in power per cycle. Even on the desktop this is good: my current graphics card is about twice as powerful as the one I bought in 2010 and uses only a third of the power, which means the system fans don't have to run as loud to keep temperatures down.

I don't think mobile devices need much more processing power, but every
reduction in power use is good.

------
mud_dauber
Moore's Law hasn't slowed innovation. The cost of fab toolsets and the cost of
chip design are the culprits.

Many cutting-edge tools (etchers, deposition systems, others) now come with price tags of >$5M. A new leading-edge fab can easily run north of $4B. The relentless trend of feature-size shrinkage is still happening - some designs are underway using 16nm FinFET technology - but at the cost of increased power consumption and increased cost per transistor. I believe it was Nvidia that put a shot across the bow of chip manufacturers by saying they saw no economic advantage in chasing the latest process node.

A clean-sheet-of-paper microprocessor design can easily hit $50M in expenses
before the first chip is seen. Much of that expense is sheer manpower &
verification processing cycles - a billion transistors' worth of real estate
is worthless unless you can run a test suite against it. If you're spending
>$50M on a new design, you'd better have a locked-in customer. Projecting that
amount of investment for a squishy market like a new consumer electronics
widget is the surest way to get your project killed.

Running a microprocessor at ever-greater frequencies doesn't help either.
First, you're cooking your chip. Frequency = heat. Second, every processor
vendor in the world is limited by the processor-memory chokepoint. You can't
read data & code from memory into the processor(s) fast enough to keep the
processor pipeline full. And the gap is widening.
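A back-of-envelope way to see the chokepoint (all numbers here are assumptions for illustration): compare how many operand bytes a core could consume per second with what main memory can actually deliver.

    # Illustrative numbers only: operand demand vs. DRAM bandwidth.
    core_hz       = 4e9        # assume a 4 GHz core
    ops_per_cycle = 4          # assume 4 operations issued per cycle
    bytes_per_op  = 8          # assume each op wants one fresh 64-bit operand
    dram_bw       = 25e9       # assume ~25 GB/s of memory bandwidth for that core

    demand_bw = core_hz * ops_per_cycle * bytes_per_op   # 128 GB/s of demand
    print(demand_bw / dram_bw)                           # ~5x shortfall -> stalled pipeline

Caches hide most of that in practice, which is exactly why cache-friendly code flies and pointer-chasing code crawls.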

------
wmf
If some company could have built 10x better transistors than the competition, it could have made billions, if not tens of billions. Yet no company did this. The simplest explanation is that it wasn't possible.

~~~
hershel
Technologically it does seem possible. But if they had built it, wouldn't somebody else have built similar factories, and then a race to the bottom would have begun and nobody would have made that much money, especially relative to the money they made with a new generation every 18 months?

~~~
wmf
It certainly would lead to a lot of risk if there weren't a consensus roadmap; once company A announced 10x, company B might try to one-up them with 30x and then fail, etc.

