
10GHz at under 1V by 2005 - The future of Intel’s manufacturing processes [2000] - cft
http://www.anandtech.com/show/680/6
======
wmf
Article says: _Obviously this 8 – 10GHz clock range would be based on Intel’s
0.07-micron process that is forecasted to debut in 2005. These processors will
run at less than 1 volt, 0.85v being the current estimate._

Intel introduced a 65 nm (0.065 micron) process in 2006. The "Cedar Mill"
Pentium 4 processor ran at 3.6 GHz at a whopping 1.3V, although a small double-
pumped part of the processor ran at 7.2 GHz. It could be overclocked to
4.5/9.0 GHz at 1.4V.

The discrepancy between 0.85V and 1.3V was caused by the end of Dennard
scaling. Basically, transistors require much more voltage than predicted and
thus consume far more power than predicted. Although the transistors can
technically run at 9 GHz, the resulting power density is very difficult to
cool.
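
To put a number on the voltage miss: dynamic CMOS power scales roughly as
P ≈ C·V²·f, so at equal capacitance and frequency the voltage ratio alone
tells the story. A back-of-envelope sketch (mine, not from the article):

    # Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
    v_predicted = 0.85  # volts: Intel's 2000-era estimate
    v_actual = 1.30     # volts: what Cedar Mill actually needed
    # Power ratio at equal capacitance and frequency:
    print((v_actual / v_predicted) ** 2)  # ~2.34x the predicted power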

~~~
amelius
> Although the transistors can technically run at 9 GHz, the resulting power
> density is very difficult to cool.

But nowadays we have processors with multiple cores, where sometimes you need
only 1 core (and it needs to be fast). So would it make sense to increase the
clock frequency for those cores, but multiplex them quickly to allow them to
cool?

~~~
PaulHoule
Nope. It is not just thermals but also memory latency. If you have four cores
and each has two register files, you can get 8x the bandwidth at the same
latency.

That 10GHz talk was a lie on Intel's part to intimidate people away from AMD:
not only would a 10GHz P4 melt down, it would be stalled all the time by
memory latency. Too many things didn't work out for it to have been an honest
mistake.

Today there is talk of a big clock rate bump (to 200 GHz or so) if they go to
a different semiconductor, but at that point you probably need a fiber optic
or terahertz wave link to memory to keep the pipeline full.
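
For a sense of the latency problem, here's a rough sketch assuming DRAM access
time stays flat at ~60 ns (a representative figure, not a measurement):

    dram_latency_ns = 60  # assumed, roughly typical
    for ghz in (1.4, 3.0, 10.0):
        # cycles wasted per cache-missing load = latency (ns) * clock (GHz)
        print(f"{ghz:5.1f} GHz: {dram_latency_ns * ghz:6.0f} cycles per miss")
    # Raising the clock doesn't shrink the wait; it just wastes more cycles.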

~~~
Sir_Substance
>Today there is talk of a big clock rate bump (to 200 GHz or so) if they go to
a different semiconductor, but at that point you probably need a fiber optic
or terahertz wave link to memory to keep the pipeline full.

You talk as if there couldn't possibly be a benefit to an increase in speed
without a corresponding increase in memory bandwidth. Whilst it wouldn't be an
optimally efficient system, if we /could/ bump to 9GHz (or 200GHz), wouldn't
it be worth doing so for at least some kinds of calculations, even if the
memory can't keep up?

edit: Both responses were super-interesting. Don't wanna reply to both, but
thanks all :)

~~~
eslaught
There's a term for this: computational intensity, i.e. the ratio of useful
compute operations per memory load in an app.

Are there apps that have high computational intensity? Sure, matrix
multiply is one of them. That's one of the reasons why dense linear algebra
serves as the standard benchmark to determine the top 500 supercomputers in
the world.
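
To make the ratio concrete, here's a simplified sketch of the two regimes,
counting flops per word of memory touched and ignoring caches:

    # Computational intensity = useful compute ops per word loaded.
    def dot_product_intensity(n):
        flops = 2 * n         # n multiplies + n adds
        words = 2 * n         # two input vectors of length n
        return flops / words  # 1.0 -- memory-bound at any clock rate

    def matmul_intensity(n):
        flops = 2 * n ** 3    # n^2 dot products of length n
        words = 3 * n ** 2    # two input matrices + one output
        return flops / words  # 2n/3 -- grows with problem size

    print(dot_product_intensity(1024))  # 1.0
    print(matmul_intensity(1024))       # ~683 flops per word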

But even in HPC (high performance computing), many if not most apps actually
have relatively low computational intensity (i.e. in the range of one or so
compute operations per word of memory loaded). In this regime, it really
doesn't make sense to grow compute out of proportion with memory bandwidth
because you'll just be idling the processors.

And while I have no proof, I'd expect HPC applications to generally be more
computationally intense than general consumer computing tasks. So I'd expect
that computational intensity goes mostly down from here.

------
ChuckMcM
Ah yes, the 10GHz CPU. Now if you run 6 cores at 2.5GHz each[1], is that 15GHz?
By the end of 2001 it was clear that Intel really needed to rethink things.
And ever since that time we've had a series of "linear projections" which have
come up short (process nodes, power levels, cpu power). Not to say we don't
have some pretty awesome toys to play with that can do a lot more than
machines of a decade ago could do, but every time someone takes an exponential
and extends it out into the future to make a point, I stop and ask "And what
if this is the end of this s-curve?" Nature hates an exponential almost as
much as she hates vacuums :-)

[1] [https://www.amazon.com/Intel-Xeon-2620-Processor-
BX80621E526...](https://www.amazon.com/Intel-Xeon-2620-Processor-
BX80621E52620/dp/B007H29ECE)

~~~
redcalx
There's an argument that when the current s-curve (sigmoid) starts to reach
its limit we start looking for other ways of obtaining improvements, such that
over time we have a series of sigmoids and overall exponential progress...
until the final sigmoid.

It reminds me of Malthus thinking the economy/world was doomed because we were
about to run out of trees, and then we started making significant use of coal
(coal had been used for a long time, but only on a small scale).

------
Strom
It's interesting that while we aren't at 10 GHz quite yet, the functionality
that was mentioned in the article [1][2] is here and works even on budget
smartphones.

--

[1] _Imagine being able to speak normally with your computer as you would a
secretary sitting next to you and have your computer accurately and quickly
take notes from your speech._

[2] _Imagine logging onto your computer not via a user name and a password but
by sitting in front of your display and having it scan your face to figure out
if you are allowed access to the computer._

~~~
new299
Is this really the experience people have with voice recognition? I can't get
Siri to reliably create appointments; it takes about 3 tries for the voice
recognition to work correctly.

~~~
Kiro
Definitely. Google Home picks up everything perfectly, to the point that I'm
really surprised the few times it actually fails.

~~~
dx034
With Amazon Echo this was exactly my experience for the first week. Since
then, I've learned where it fails and have become more annoyed by it. The
clear difference between current voice recognition and a real person is that
these assistants don't adapt to the way you talk. They don't learn if they
misunderstood you once.

As an example, if I ask Alexa to play a certain kind of music and she plays
the wrong one, I'll have to specify it further to get the correct music. Next
time I'd expect her to take the hint, but she'll make the same mistake again.

The most annoying thing, though, is that chained commands don't really work
with any assistant. I'd expect to give 4-5 commands at once and have them
executed. Activating it for each command is very annoying.

------
Jedd
My first computer ran at a touch over 1MHz, and I've had a lifetime of needing
(sic) to upgrade my computer every couple of years to chase or improve upon
the same perceived performance levels.

It's refreshing that -- despite not having 10GHz CPUs available -- my 6yo
CPU/mobo still does not feel in any way slow or in need of upgrade.

Yes, other components have offset the relatively slow pace of CPU improvements
(GPUs for gamers, SSDs for everyone, etc) but it feels like we're enjoying a
lengthy era of 'good enough grunt' on the CPU front (for _most_ of us).

~~~
qball
>6yo CPU/mobo still does not feel in any way slow or in need of upgrade

The limiting factors have changed- or rather, the limiting computers have
changed.

Throwing more client horsepower or internet speed at the problem can't solve
the fact that the server (and to a limited extent, an internet connection) has
to deliver both a huge amount of JavaScript and the (typically
image/video/audio-heavy) content itself.

The client is always underutilized- the fact that smartphone apps (or rather,
cached local copies of what would normally be a website) have similar
performance to a modern desktop PC should speak volumes about that. It's
probably why Microsoft smartphone-ized Windows; though their execution of that
was awful and it didn't help that WinRT wasn't mature on release.

And raw server performance (like desktop performance) has been at a plateau
for a while now as well. The lack of competition for Intel doesn't help that (
_maybe_ AMD's new processor line will start driving improvements again, but
there are no hard numbers on performance; and higher-TDP ARM designs aren't
currently competitive with Intel's in this space either).

Until this changes, and there's nothing to indicate it will on Intel's
roadmaps, clients will continue to be good enough- it might be the first time
in history where computers (since 2008 or so) are replaced because the
hardware failed and not because they were insufficiently fast.

~~~
pmoriarty
_" it might be the first time in history where computers (since 2008 or so)
are replaced because the hardware failed and not because they were
insufficiently fast."_

You're forgetting two things:

1 - Games

2 - VR

Games especially have a very long history of having a bottomless appetite for
more and more powerful hardware every year as they push the limits of what's
possible.

VR is dramatically upping hardware requirements, and those requirements are
going to exponentially increase as consumers start to demand and expect 8k per
eye, full motion, high framerate, 360 degree, 3D, wide field of view,
interactive VR -- all on smaller and lighter headsets, ideally wirelessly.

~~~
qball
>Games especially have a very long history of having a bottomless appetite for
more and more powerful hardware every year as they push the limits of what's
possible.

Sure, though the CPU has less to do with that. For reference, most modern
games still perform just fine on first-gen i7s and second-gen i5s; the most
recent of those was released 6 years ago. GPUs are a replaceable part where
CPUs are not.

At least GPU technology is still advancing in big ways, though I think that's
more a property of how they're built, what their functions are, and
(especially for higher-end cards) what kind of power budget users are willing
to accept. It's unusual for the newest mid-range card not to match the
previous high-end card.

Intel, on the other hand, has never released a CPU with a TDP over 150W (AMD
had a couple at 220W), even though most in the overclocking community know
that 5GHz is regularly attainable on modern CPUs -- and that's been mostly
true since 2013.

------
NamTaf
I remember reading this news (and perhaps even this exact article - it
certainly feels familiar) back when it was released and being so excited as a
15 year old gaming nerd. It really was a time when stuff was changing so
rapidly - and those changes were resulting in massive, really obvious
performance gains - that you couldn't help but just look forward to where it
might go if the pace continued. Everything was about the big frequency numbers
and it was easy to buy into the hype.

It'd be interesting to know the sort of IPC delta between the NetBurst
architecture and Kaby Lake. Also, it'd be interesting to know how fast you'd
theoretically have to push NetBurst to see the same performance as Kaby Lake.

~~~
dr_zoidberg
This is from 2011, but it should give you an idea:
[http://www.tomshardware.com/reviews/processor-architecture-b...](http://www.tomshardware.com/reviews/processor-architecture-benchmark,2974-15.html)

tl;dr: A Core i7-2600K @ 3GHz is about 2.67x faster than a Pentium 4 HT 660
(Prescott uarch) @ 3GHz. In the benchmark they tested single core, at equal
clocks, to keep everything as level as possible.

As for single-core performance, Kaby Lake would be between 30 to 40% better
than Sandy Bridge (which could be called the peak-improvement architecture of
the Core i brand), so it'd be about ~3.5x faster than the Prescott based
chips.
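
Chaining those two rough figures together (nothing more rigorous than
multiplication):

    p4_to_sandy = 2.67                # Tom's Hardware, equal 3GHz clocks
    for kaby_gain in (1.30, 1.40):    # estimated Sandy Bridge -> Kaby Lake
        print(f"{p4_to_sandy * kaby_gain:.2f}x")  # 3.47x and 3.74x, so ~3.5x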

------
dottrap
I'm surprised Intel made this bold claim so late. When I was in college in the
late '90s, my computer architecture prof said the physics problems get really
hard starting around 4 to 8 GHz. He said it as though it were common knowledge
in the industry.

~~~
rconti
Damn. The linked article says nothing about Intel claiming 10GHz; the author
is just speculating that, applying a 9x speed increase to NetBurst (the same
9x they saw with the P6), 10GHz should be doable. But then the link at the
bottom, "FACTS FROM INTEL", specifically mentions 10GHz, presumably from the
horse's mouth.

------
antognini
Just to give a little intuition for exactly how fast your CPU runs (assuming
~3 GHz), a single cycle takes about as much time as it takes a photon to
travel from your monitor to your eyeball.

~~~
theandrewbailey
299,792,458 meters/second ÷ 3,000,000,000/second = 9.99 cm

Just how close are you sitting to your monitor?

~~~
antognini
Perhaps I need glasses. :)

------
matt_wulfeck
The funny part is that I'm reading this article almost 2 decades later, on a
CPU that's probably throttling itself down to the 1.4 GHz level... the top of
the line back then.

~~~
Cthulhu_
And yet, it's probably over a factor of 10 faster and a factor of 10 more
efficient (random numbers; I don't actually know how CPUs have scaled, or
whether they can accurately be compared).

------
PhasmaFelis
Anecdotally, I've heard that Everquest 2 (released 2004) is still impossible
to run at 60FPS with all graphical settings maxed; the devs attempted to
future-proof it with speculative high-end graphics options, but they optimized
for high-GHz processors which never arrived.

~~~
rangibaby
No single GPU could run Crysis (2007) at 1080p / 60fps until last year!

------
userbinator
Over a decade later, even overclockers haven't managed to reach 10GHz --- but
some have come close:

[http://www.tomshardware.com/news/amd-
fx-8150-overclock-9ghz-...](http://www.tomshardware.com/news/amd-
fx-8150-overclock-9ghz-bulldozer,15853.html)

At 10GHz, light travels approximately 3cm during each clock period.

Note that transistors operating above 10GHz are not rare and are used in
microwave applications; as I understand it, the difficulty is in building
logic circuits out of them at a scale suitable for a CPU.

~~~
bhouston
> 10GHz, light travels approximately 3cm between each period of the clock

Light doesn't change speed, so this statement confuses me.

~~~
mbell
> Light doesn't change speed, so this statement confuses me.

Practically speaking, it does change speed. The speed of light in a vacuum is
fixed, but propagation speed varies with the medium the signal travels
through.

The propagation velocity of an EM signal in RG coax cable is about 80% of c,
PCB traces can be as low as 50%, and somewhat surprisingly both fiber optic
cable and cat6 cable are about the same, at ~60-70%. I don't know if there is
any good public information about the velocity factor of modern CPU
transmission lines.
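
To put numbers on it, here's a sketch of how far a signal travels per cycle of
a 10GHz clock at those approximate velocity factors:

    C_VACUUM = 299_792_458  # speed of light in vacuum, m/s
    clock_hz = 10e9         # 10GHz

    for medium, vf in [("vacuum", 1.00), ("RG coax", 0.80),
                       ("fiber / cat6", 0.65), ("PCB trace", 0.50)]:
        cm = C_VACUUM * vf / clock_hz * 100
        print(f"{medium:13s}: {cm:.1f} cm per clock cycle")
    # vacuum: 3.0, coax: 2.4, fiber/cat6: ~1.9, PCB: 1.5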

------
aristidb
I see three main failed predictions:

    - 10 GHz
    - long pipelines; I think we're currently at around 14 stages, which is more than P6 but less than NetBurst (Pentium 4)
    - EUV lithography, which is still not a thing at 10 nm

Did I miss any important failed predictions?

------
malikNF
Slightly off topic: this reminded me of the whole over-clocking scene in the
2000s, being excited over these things. :)

[http://valid.canardpc.com/records.php](http://valid.canardpc.com/records.php)

[https://www.youtube.com/watch?v=UKN4VMOenNM](https://www.youtube.com/watch?v=UKN4VMOenNM)
(the world record at 8.429GHz)

------
wapz
> Realistically speaking, we should be able to see NetBurst based processors
> reach somewhere between 8 – 10GHz in the next five years before the
> architecture is replaced yet again

was a statement made by AnandTech. Did Intel actually strive for those
numbers?

~~~
gregoryrueda
Haven't we reached the point of diminishing returns regarding processor speed?
I believe this to be true for the consumer market.

~~~
2muchcoffeeman
Why would that be the case? If we could run CPUs at 50GHz with no problems,
why wouldn't we?

Edit: I probably exaggerate. I just mean that higher clock speeds still have
benefits.

~~~
Zombieball
I think this "conveyor example" from Intel is a good explanation as to why:
[https://software.intel.com/en-us/blogs/2014/02/19/why-has-
cp...](https://software.intel.com/en-us/blogs/2014/02/19/why-has-cpu-
frequency-ceased-to-grow)

I think that you would cease to see any acceleration from increased clock
speed unless you also sped up other parts of the processor / computer.

I think your sentiment is correct. If we can take advantage of higher clock
speeds, then why not do it?
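
A toy model of that conveyor effect (my own numbers, not from the Intel post):
total runtime is compute cycles divided by frequency, plus a memory-stall time
the clock can't touch.

    compute_cycles = 3e9  # arbitrary workload
    memory_seconds = 0.5  # time stalled on memory, unaffected by clock

    for f_ghz in (3, 10, 50):
        runtime = compute_cycles / (f_ghz * 1e9) + memory_seconds
        print(f"{f_ghz:3d} GHz -> {runtime:.2f} s")
    # 3 GHz -> 1.50 s, 10 GHz -> 0.80 s, 50 GHz -> 0.56 s:
    # each bump buys less, because the memory half never shrinks.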

~~~
krylon
Reminds me of something Seymour Cray (allegedly) once said: "Anyone can build
a fast processor - the trick is to build a fast computer around it". Given the
performance boost an SSD gives, I imagine there's some room left for improving
overall system architecture without increasing the clock speed of the CPU or
replacing the CPU at all.

(Caveat lector: Quoting from memory here)

------
moogly
Interesting. One day, maybe one day, Intel will release a desktop processor
that is worth upgrading to from my 6 year old 2600K. One that doesn't cost
$1723, that is.

~~~
Zachery
Define worth?

They have released a processor that is worth upgrading to:
[https://ark.intel.com/products/97129/Intel-
Core-i7-7700K-Pro...](https://ark.intel.com/products/97129/Intel-
Core-i7-7700K-Processor-8M-Cache-up-to-4_50-GHz)

[https://ark.intel.com/products/52214/Intel-
Core-i7-2600K-Pro...](https://ark.intel.com/products/52214/Intel-
Core-i7-2600K-Processor-8M-Cache-up-to-3_80-GHz)

There is a massive host of improvements between those two processors. Each
release since Sandy Bridge has continued to increase performance by 5 to 10%
in most tasks, and by more in some specific ones. Over 4 releases that is a
noticeable effect.

That processor is 350 US dollars.

------
kevin2r
So, what was the reason CPUs could not scale vertically (higher frequency)?
Temperature? Stability?

~~~
iamaaditya
Technically it was heat but mostly it was due to (i) Economics, there was less
demand for faster clock speed. Otherwise more research could have gone towards
solving heat problem. (ii) Each cycle of CPU was more efficient with ability
to execute multiple instructions in a single cycle and with more efficient
instruction sets.

Surprisingly, power consumption also made huge impact. As tablets and laptops
got more popular than desktop battery life became a major concern and thus TDP
played major role in research.

Try this fun experiment: Underclock your CPU by half a GHz and see if you
notice the difference in your day to day work.

~~~
Baeocystin
Single-thread performance is as important as it has ever been.

That a secretary typing a document or someone who only spends time on Facebook
doesn't notice the difference is irrelevant -- consider, for example, the
massive capital outlay by the financial industry to locate servers as close to
the world's trading hubs as possible. If they are willing to pay whatever it
takes to shave milliseconds off a round trip, faster CPUs are a part of that
equation.

~~~
krylon
> faster CPUs are a part of that equation.

I think the GP did not dispute that, but pointed out that for CPU
speed/throughput, clock speed is only part of the picture. Adding functional
units and allowing the CPU to process more instructions in parallel can have a
big impact, as can e.g. a larger cache, better branch prediction, and so
forth.

If you give people faster CPUs, they will cheer and find something to keep
them busy. ;-) And for some people, there is no such thing as "fast enough".
But for a fairly large share of desktop/mobile users, the CPU is not the
limiting factor as much as memory bandwidth and I/O are.

~~~
Baeocystin
I don't disagree with that statement in a general sense. But what earns Intel
its money and marketplace dominance? The cheap Celeron/Pentium-class chips
sold in bargain laptops & Best Buy specials? Or the high-end, single-thread
performance chips?

------
willholloway
The current generation of Intel chips can be comfortably overclocked to 5GHz
with a simple water cooling setup.

~~~
pmoriarty
You only have to risk frying your computer and destroying your data if the
water cooler springs a leak.

------
wscott
Glad to see the correct answer at the top here. The "fireball" section of the
P4 did run at double the frequency of the rest of the processor, so internally
that was considered the processor frequency.

One of my first contributions to the Linux kernel was a bugfix to the bogomips
routine. It stored the result in a 32-bit variable, and our 8+ GHz chilled
test machines would cause that to wrap.
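
Roughly the flavor of the overflow, as a hypothetical reconstruction rather
than the actual kernel code (assuming ~2 clock cycles per delay-loop
iteration):

    U32_MAX = 2**32 - 1  # 4,294,967,295

    def loops_per_second(clock_hz, cycles_per_loop=2):
        # hypothetical: ~2 clock cycles per calibration-loop iteration
        return int(clock_hz // cycles_per_loop)

    for ghz in (3.6, 8.0, 9.0):
        v = loops_per_second(ghz * 1e9)
        wrapped = v & U32_MAX  # what a 32-bit variable actually stores
        print(f"{ghz} GHz: {v:,} -> {wrapped:,}"
              + ("  <- wraps!" if v > U32_MAX else ""))
    # Under these assumptions the value wraps just above ~8.6 GHz.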

It was then determined that the processor would be marketed using the slower
clock frequency. This was the right answer, but it didn't feel like it at the
time.

That article predates the marketing change. It might have made them realize
they needed that change.

------
ksec
We may have to move beyond silicon to get 10GHz.

In terms of general-purpose single-threaded IPC, we've basically had little to
no improvement since Sandy Bridge apart from clock speed. SSD / IO speed has
helped fill the performance gap over the past 4 - 5 years.

Now we are waiting for the next big improvement to come, if there is one.

------
richardboegli
Best so far is 8.7GHz, with an AMD chip which is a bit old now. The best for
Kaby Lake is 7.3GHz. All using liquid nitrogen, of course.
[http://valid.canardpc.com/records.php#js-freq_all](http://valid.canardpc.com/records.php#js-freq_all)

------
jlebrech
I've always wondered why the need for GHz; isn't MIPS what you're looking for?
Wouldn't a 50MHz 1000-core CPU do well with decent parallelism in a compiler?

How many cycles does the average method/function/procedure need anyway?

~~~
chriswarbo
The standard response is to quote Amdahl's Law
[https://en.wikipedia.org/wiki/Amdahl's_law](https://en.wikipedia.org/wiki/Amdahl's_law)

If 90% of your runtime can be done in parallel, you still have to wait for
that last 10%, so no number of cores can give you more than a 10x speedup. If
you throw 1000 cores at the problem, you'll have 999 cores processing the 90%
in parallel, each performing about 0.09% of the workload, but you'll still
have 1 core doing the 10% that's serial. Those 999 cores will be idle roughly
99% of the time, waiting for that last core to finish.
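
Plugging a 90%-parallel workload into the formula, speedup = 1 / ((1 - p) + p/N):

    # Amdahl's law: speedup with n cores when fraction p parallelizes.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for cores in (1, 10, 100, 1000):
        print(f"{cores:5d} cores: {amdahl_speedup(0.9, cores):.2f}x")
    # 1 -> 1.00x, 10 -> 5.26x, 100 -> 9.17x, 1000 -> 9.91x; limit 10x.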

The counter-argument is Gustafson's Law
[https://en.wikipedia.org/wiki/Gustafson%27s_law](https://en.wikipedia.org/wiki/Gustafson%27s_law)

This says that people don't choose a particular task, then wait for a computer
to do it. Instead, the choice of which task to perform depends on what the
computer can manage. Hence the user of a 1000 core machine will choose to do
different tasks than the user of a 10 core machine, or a 1 core machine.

Whilst Gustafson's Law is clear from experience (a PlayStation 4 isn't used to
run Pacman _really fast_ ), Amdahl's Law is the one that's relevant for
compilers: a "sufficiently smart compiler" can alter your code in all sorts of
ways, but the resulting executable must still perform the same task (otherwise
it's a bug!).

There might be an approach based on e.g. writing an abstract specification and
_deriving_ a program which is suitable for the given hardware, but that's a
long way off (for non-trivial tasks, at least).

------
pmoriarty
What ever happened to quantum computing, DNA computing, and optical computing?

Are consumers ever going to see a general purpose computer based on any of
those technologies on their desktops?

~~~
Analemma_
> Quantum computing

Still gradually in development, presumably coming someday. It's very difficult
to get quantum systems of any real complexity to not decohere before they can
do useful computation.

> DNA computing

DNA-based storage apparently exists in the lab and might become practical
someday: [http://arstechnica.com/information-technology/2016/04/micros...](http://arstechnica.com/information-technology/2016/04/microsoft-experiments-with-dna-storage-1000000000-tb-in-a-gram/).
Current costs are something like $40 million/GB, so that'll have to come down
"a bit". I don't know of anyone doing computation with DNA; biology seems too
slow for that.

> Optical computing

It's very difficult to make light interact with other light (which is a _sine
qua non_ for computation) except inside a material with electrons, where you
get significant losses and there's no real advantage over plain old
transistors. Not likely to become a useful technology, IMO.

------
wisienkas
It is about having vision, really. Appreciate their vision from back then, and
embrace what they have achieved so far.

------
sontakey
Well, where can I get mine?

------
ChristianGeek
Well, this would be less then, wouldn't it?

------
taylorh140
Intel can't beat the heat.

