

What will happen when Clock Speeds stop increasing? - jonnytran
http://glomek.blogspot.com/2008/09/when-clock-speeds-stop-increasing.html

======
iigs
I agree with the sentiment (wouldn't it be nice not to treat computers, which
are full of nasty chemicals, as items meant to be thrown away every other
year).

Unfortunately you will find that this isn't the way it works. Manufacturers
will keep rebundling the same general parts in slightly different
arrangements, with a fresh design, and sell it to the disposable-product-loving
public. Even beyond the core processor, a PC is made up of a ton of little
accessories that require drivers, and these are going to keep shifting around
for the next couple of decades. Linux is arguably better than Windows at
supporting old, outdated hardware, but good luck finding current OS support
for anything but the most popular 1993 (now() - interval 15 year) products in
the latest kernels.

In my opinion the precedent was set by the auto industry. Sure, there are a few
people who collect any given type of car and daily-drive it 40 years later,
but for every one of those there are hundreds of people who keep their old car
only as a curiosity (i.e. a show car), and for every one of _those_ there are
probably hundreds of cars that are trashed and sitting in junkyards or have
been recycled into new ones.

~~~
nihilocrat
A more positive way of expressing the sentiment might be "manufacturers will
find a way of improving performance, though the gains might be harder to
capitalize on than clock speeds or core count". Either way, given that the
industry can't survive selling old products, it will find a way of selling
us more stuff. We can only hope that we actually get some benefit out of the
new products.

On the plus side, notice that today, with a computer that is 5 or so years
old, you can get on the internet, watch Flash video, play music, etc.
Compare that to using a 5-year-old PC in 2000, or even worse, a 5-year-old PC
in 1995.

~~~
DabAsteroid
_"manufacturers will find a way of improving performance, though the gains
might be harder to capitalize on than clock speeds or core count"._

Don't count cores out. Intel isn't.

http://www.ciol.com/Intel/CP-Intel-CIO-Update/Future-Intel-design-codenamed-Larrabee/8908110053/0/

 _Larrabee is expected to kick start an industry-wide effort to create and
optimize software for the dozens, hundreds and thousands of cores expected to
power future computers._

------
scott_s
We haven't had a significant clock speed increase in a while; it's pretty much
already happened.

~~~
technoguyrob
Will I ever be able to buy a 10GHz CPU?

Someone please comfort me.

~~~
jfoutz
No, I don't think so. There are 10GHz transistors, but they're gallium arsenide
rather than silicon. I'm going to pretend that I know (I don't): if you try to
go much faster than 5GHz with silicon, you have to use smaller electron bundles
-- lower voltage -- because the chips will melt if the voltage is upped too
much. Because the signal is less clear, undetected/uncorrected errors start
happening all the time. Remember, at 10GHz even a one-in-a-billion error rate
is terrifying.
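
To put a number on that -- a back-of-the-envelope sketch with an assumed error
rate, nothing measured:

    # 10GHz means 1e10 cycles per second, so a per-cycle error rate
    # that sounds negligible turns into constant silent failure
    cycles_per_second = 10e9   # 10GHz
    error_rate = 1e-9          # assume one undetected error per billion cycles
    print(cycles_per_second * error_rate)  # -> 10.0 errors every second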

 _hugs_

~~~
Retric
I thought some of the ALUs were double-pumped in the P4 and newer CPUs. If
that's the case we are already running some parts of the CPU at 3.7 * 2 =
7.4GHz.

Edit: Yeah, the integer ALUs operate at twice the clock speed. I think the
move to 64-bit CPUs slowed down the clock speed race, as did adding more cores
and insane amounts of L2 cache due to heat issues. As a side note, when I
picked up my quad-core CPU at 2.4GHz it crushed my old 1.8GHz 32-bit CPU, so I
think they are making some wise tradeoffs given system bus limitations. Plus
the old P4s seemed to trade off a lot of effective performance for pure clock
speed. So after we get back on track we might see a new clock speed race once
we have 8-32 cores.

~~~
scott_s
You put the cart before the horse: we're moving to multiple cores
_because_ we can't increase the clock speed. The number of transistors we can
fit on a chip is still following Moore's Law, we're just using them in a
different way.

We were squeezing more and more performance out of single cores by lengthening
the instruction pipeline, which increased the amount of instruction-level
parallelism processors could exploit at runtime. The difficulty is that as the
pipeline gets longer, it takes longer to send information across it. As we
decrease cycle time (the same as increasing the clock speed), it becomes
harder and harder to communicate from one end of the pipeline to the other in
a single clock cycle.
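
A toy model of that tradeoff (all numbers made up, just to show the shape of
the curve):

    # deeper pipelines split the same logic across more stages, so each
    # stage does less work and the clock can rise -- but the fixed
    # per-stage latch and wire overhead doesn't shrink along with it
    total_logic_ns = 10.0    # assumed total logic delay per instruction
    stage_overhead_ns = 0.1  # assumed fixed cost per pipeline stage
    for stages in (5, 10, 20, 31):
        cycle_ns = total_logic_ns / stages + stage_overhead_ns
        print(stages, "stages ->", round(1.0 / cycle_ns, 2), "GHz")

The returns diminish: once the fixed overhead dominates the cycle, adding
stages stops buying clock speed, which is roughly the wall the very deep P4
pipelines ran into.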

------
MoeDrippins
Parallelization + horizontal scaling is my guess. If you can't scale up, scale
out.

------
akeefer
The author definitely makes a mistake in equating slow programs with poorly
written programs. Yes, it's true that people who write horrible code often
also write horribly non-performant code, and that rushing a product to market
without performance testing is far too common in our industry. But in my
experience (and I've done a lot of performance tuning), making something
perform well generally requires selectively damaging the code: making it less
understandable and breaking rules about encapsulation and separation
between layers. There's a reason that whole "premature optimization is the
root of all evil" saying is so popular. Performance tuning is also incredibly
time-consuming and labor-intensive. It might make you feel more hardcore to do
it, and it can certainly be a useful intellectual exercise, but it's still
time that could have been spent on other kinds of product improvement.

The increase in clock speeds has been a primary driver behind people's ability
to use higher-level languages, as well as their ability to write larger and
more complicated applications. So assuming that a stop to the increase in
clock speeds will lead to better software is flat-out wrong; if anything, it
will essentially halt certain kinds of progress in language and framework
development by preventing people from using higher-level abstractions that
don't perform as well.

------
drinian
There will always be new uses for more power, ones that we can't see today.
It's silly to say you'd be happy to see that go away.

And, yes, I think a lot of the gains in future computing power will be in
parallelization, but that wasn't the point of this article.

------
ars
No more singularity is the most obvious effect.

I don't see people talking about this, but the exponential curve broke.

Sure, you can add cores, but you could do that 5 years ago (or earlier) too --
and you don't see AI coming even from well-funded organizations.

It's over - don't wait for it.

~~~
hugh
Relax. Even if we had the hardware today we'd still have no idea how to write
passable AI software. By the time somebody figures that out, maybe the
hardware will have caught up.

~~~
Herring
We do know how to write AI; the problem is that it stops being AI as soon as
we've written it.

------
speek
Asynchronous processors will be here soon!

~~~
jfoutz
Nope. Coders have to figure out threads first. Once there's a nice model you
can teach a good EE student, we get to spend fewer transistors on clocks.

------
zandorg
I wrote software that finds textual graphics in a bitmap. I parallelised it by
running, say, 4 (or any number of) threads, each handling 1/4 of the bitmap.
It works really well, and there's no overlap.
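
Something like this, as a minimal sketch (the names are made up, and I've used
processes rather than threads since CPython threads won't run CPU-bound
scanning in parallel; the decomposition is the same):

    # split the bitmap's rows into one strip per worker; each strip is
    # scanned independently, so no locking is needed until the merge
    from concurrent.futures import ProcessPoolExecutor

    def find_text_in_rows(bitmap, start, stop):
        # placeholder for the actual text-detection pass over rows
        # start..stop; returns whatever hits it finds in that strip
        return []

    def parallel_find(bitmap, workers=4):
        rows = len(bitmap)
        chunk = (rows + workers - 1) // workers
        ranges = [(i, min(i + chunk, rows)) for i in range(0, rows, chunk)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            parts = list(pool.map(find_text_in_rows,
                                  [bitmap] * len(ranges),
                                  [r[0] for r in ranges],
                                  [r[1] for r in ranges]))
        return [hit for part in parts for hit in part]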

~~~
queensnake
That's called an 'embarrassingly parallel' problem; it's not the general case.
I guess, just like big O, people have to take a class to appreciate the
space of what it's possible to parallelize and how much it buys you.
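
The usual back-of-the-envelope for "how much it buys you" is Amdahl's law: if
only a fraction p of the work parallelizes, n cores give a speedup of
1 / ((1 - p) + p / n).

    # Amdahl's law: the serial fraction caps the speedup no matter
    # how many cores you throw at the parallel part
    for p in (0.5, 0.9, 0.99):
        for n in (4, 64, 1024):
            print(p, n, round(1 / ((1 - p) + p / n), 1))
    # even with 90% parallel work, 1024 cores top out below 10x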

~~~
zandorg
At university we used to refer to big O as Roy Orbison. Just an example of our
kind of small-town humour.

