
AMD Unleashes First-Ever 5 GHz Processor - jve
http://www.amd.com/us/press-releases/Pages/amd-unleashes-2013jun11.aspx
======
icegreentea
This is pretty important for AMD because they've been having a terrible time
matching Intel on per-cycle efficiency. Basically, ever since the Core i
series started, AMD has been behind on single-threaded performance, especially
when going clock for clock. They've been trying to make up for it by both
offering more cores/threads than comparably priced Intel components, and by
trying to scale their clock rates. Unfortunately for AMD, their last few
generations have actually fallen short in their attempts to boost clock
rates. As a result, when comparing comparably priced AMD and Intel chips,
while AMD would typically have a clock speed advantage, it would often not be
enough to overcome Intel's efficiency.

This product release is kind of an attempt to show that AMD can actually
deliver on their planned strategy.

As for why AMD is going for this route, rather than trying to beat Intel in
per clock efficiency? Probably because AMD's resources are severely limited
compared to Intel, and this approach offered lower risk at lower cost.

~~~
Freaky
> As for why AMD is going for this route, rather than trying to beat Intel in
> per clock efficiency?

Well, because clock speeds are something they can improve _now_, and not in
$n years' time when their next major microarchitecture is ready. Intel had the
exact same problem with the Pentium 4, and they were similarly stuck with
minor tweaks and desperately increasing clock rates for years before Core was
ready.

~~~
Lagged2Death
I wish I had a cite for this.

It was my understanding that P-IV was originally conceived as an architecture
that could be clocked up and up for years to come. Developing a new
architecture is expensive and risky (see P-IV, Itanium), so the hope was to
design something that would scale up so well as manufacturing improved that a
few generations of architectures could be skipped, so to speak.

They had hoped the P-IV would eventually reach 10GHz or so. Which made it OK
that the P-IV retired fewer instructions per clock than the P-III that came
before. Scaling up like that isn't such a radical idea; the P6/i686
architecture behind the Pentium Pro, Pentium II and Pentium III had spanned a
spectrum from 150MHz to 1.4GHz, after all, nearly an order of magnitude.

But it turned out that somewhere between 3 and 4 GHz, things got really
difficult.

"Minor tweaks and desperately increasing clock rates" was more or less the
P-IV plan from the get go. It just turned out not to work.

~~~
yk
Anandtech discussed this in their Bulldozer review:

    
    
        AMD's architects called this pursuit a low gate count 
        per pipeline stage design. By reducing the number of 
        gates per pipeline stage, you reduce the time spent in 
        each stage and can increase the overall frequency of 
        the processor. If this sounds familiar, it's because 
        Intel used similar logic in the creation of the Pentium 4.
    

[http://www.anandtech.com/show/4955/the-bulldozer-review-
amd-...](http://www.anandtech.com/show/4955/the-bulldozer-review-amd-
fx8150-tested/3)

And since I read this, I really wondered what trick AMD has up its sleeves.

~~~
VladRussian2
There is no trick, just desperate attempts to survive.

10 years ago AMD had the more efficient architecture and Intel had the GHz
(remember 2.2 Athlons having a 3700 "PR rating"?). Intel's approach was
commercially more successful - customers were still buying GHz, and AMD was
trying to educate the market about real performance while radiating the
impression of a loser who just can't get good process and fabs. AMD gave up
and decided to pursue a P4-like approach for its new architecture, while Intel
hit the GHz "sound barrier" and went the efficiency route by resurrecting a
PIII-style architecture, which resulted in the Core CPUs. AMD made a huge,
strategic mistake 10 years ago. How the execs in charge of Bulldozer have been
BS-ing their way inside AMD for the last 5 years - that is a typical everyday
miracle of big-company internal life.

~~~
tacticus
There are also the billions Intel spent making sure Dell and HP and co. would
never buy AMD.

~~~
zanny
I actually take notice of how many consumer laptops I see in a Walmart /
Target / Best Buy that are running AMD APUs. The big brands might not buy
them, but shoppers see bigger numbers on the A6 than on a Pentium and feel
better about the buy, even if the Pentium dominates it.

~~~
AnthonyMouse
The trouble is that most computers aren't bought at Best Buy or Walmart, and
the ones that are tend to be the low end garbage with no margins for the
hardware vendor.

Keeping AMD out of Dell and HP is what kept them out of corporate America.
Corporations literally buy PCs by the pallet, and then they pass on the volume
discount to employees who want to buy one for home.

~~~
csense
> they pass on the volume discount to employees who want to buy one for home

Really? I don't see the use case for ordering a PC this way for the home. When
I'm buying a home PC, or recommending one for others, it's either:

(a) A bottom-of-the-barrel PC. As long as it has 1GB of memory and more than
one core, you can use it for web browsing, Youtube, email, and word
processing. This is what non-techies usually want (but they don't know they
want it and may get upsold by good marketing). This is what _I_ want unless
I'm planning on running a specific application that requires more.

(b) A powerful PC for gaming. It needs a decent discrete GPU if it's going to
play current games. Most office PC's don't have one, unless you work for
Pixar.

AFAIK the machines purchased by corporations for general office use are
usually middle-of-the-road beasts that cost more than category (a) but don't
have the discrete GPU of category (b). I'd guess they'd be a waste of money
for home use, even after the volume discount.

------
trotsky
Based on how Bulldozer performed, it'll end up being "sure, it's 5GHz, but did
we mention all our instructions take twice the number of clocks now?"

Virtualization host performance on our 8-core Bulldozer (ESX 5.1, private KBs
from VMware to try to help, 32GB RAM, RAID-10 ZFS SAN) was so bad (think P4
era) that we finally tracked down how to force the CPU into only using 4
cores, one per real FP core.

The reality is that there is no mainstream scheduler out there that can
efficiently use cores set up like that, especially with the long pipelines.
I'm not sure it can't be done, but what improvements have been made have been
minimal, or only in an academic/not-a-real-OS situation.

That's why Intel ships a compiler, duh.

It is true that the number one thing holding that part back was the raw clock
speed (as long as you view it more like a 4-core, 8-thread part a la Intel),
but I've gone back to speccing Intel - it's just not worth being that much of
a guinea pig for a firm that's basically trying to scrape by until the ARM64
parts start getting stamped.

~~~
sliverstorm
_we finally tracked down how to force the cpu into only using 4 cores_

Was the issue the shared fp units, or the turbo-core? I wonder if you can
disable the turbo-core?

~~~
mitchty
If it was anything like the Niagara processors, the shared FP units are
normally a bottleneck for FP. But the larger problem was the register
remapping/pipelines. They were fast if you were running certain workloads.
God help you if you had to compress anything on those systems. Without pbzip2
or pigz it took forever. Really bad example, but Bulldozer seemed way too
Niagara-ish to me based on its goals.

~~~
AnthonyMouse
Running threaded floating point workloads on bulldozer-derived architectures
is just folly. If you have parallel floating point code you should in general
be running it on a GPU.

~~~
trotsky
These weren't fp intensive workloads at all - mostly your typical IT IO
workloads. I don't know the internals to say exactly what or why, but
something seems to go really wrong on bulldozer when you try to schedule two
different vms on the same coupled pair of cores.

~~~
AnthonyMouse
It's because they're not independent cores. You're pretty much never going to
get the same single-thread performance with two threads running on a module as
with one. The idea is that you ought to get better than 0.5X the single-thread
performance, such that if you have two threads, 2*0.75X is better than X,
while still allowing you to get X (or better with turbo) on strictly single-
threaded workloads.

Where this can fall apart is if you're trying to use eight homogeneous threads
at once and the threads have large working set sizes, such that the second
thread causes spill out from the per-module caches. Then you have eight
threads contending for L3 bandwidth, or if you're really screwed you fill up
the L3 and start to hit main memory.

Out of curiosity, have you tried any of the Abu Dhabi Opterons? They doubled
the L3 from 8MB to 2x8MB, which I would expect to help by both keeping you out
of main memory and reducing contention by splitting each L3 between half as
many cores (assuming you don't get the new twice-as-many-cores models).

------
e12e
Some comments from Anandtech:

[http://www.anandtech.com/show/7066/amd-announces-
fx9590-and-...](http://www.anandtech.com/show/7066/amd-announces-fx9590-and-
fx9370-return-of-the-ghz-race)

Based on an old review:

[http://www.anandtech.com/show/6396/the-vishera-review-amd-
fx...](http://www.anandtech.com/show/6396/the-vishera-review-amd-
fx8350-fx8320-fx6300-and-fx4300-tested/)

and single thread performance:

[http://www.anandtech.com/show/6396/the-vishera-review-amd-
fx...](http://www.anandtech.com/show/6396/the-vishera-review-amd-
fx8350-fx8320-fx6300-and-fx4300-tested/4)

If single-threaded performance scales linearly with turbo frequency (and it
looks like it might):

The FX8320 (turbo boost 4.0GHz) scores 240.7, while the FX8350 (turbo boost
4.2GHz) scores 252.1.

The difference aligns quite nicely: (240.7/4) * 4.2 ≈ 252.74

And 5GHz should give about: (240.7/4) * 5 ≈ 300.88

This is still lower than Intel's i5 3570K (302.2 - turbo 3.8GHz) and i7 3770K
(312.4 - turbo 3.9GHz)
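As a rough sanity check, the linear-scaling assumption above can be put in a
few lines of Python (the scores and frequencies are the POV-Ray numbers
quoted above; `scaled_score` is just a name for this sketch):

```python
def scaled_score(base_score, base_ghz, target_ghz):
    """Estimate a single-threaded score assuming it scales
    linearly with turbo frequency."""
    return base_score / base_ghz * target_ghz

# FX-8320 (240.7 points at 4.0 GHz turbo) -> predicted FX-8350 (4.2 GHz):
fx8350_est = scaled_score(240.7, 4.0, 4.2)   # ~252.7 (measured: 252.1)

# Extrapolated to a 5.0 GHz turbo part:
fx9590_est = scaled_score(240.7, 4.0, 5.0)   # ~300.9
```

The prediction lands within a point of the measured FX-8350 score, which is
why the extrapolation looks plausible - though real chips rarely scale
perfectly with clock.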

And Haswell has even higher performance:

[http://www.anandtech.com/show/7003/the-haswell-review-
intel-...](http://www.anandtech.com/show/7003/the-haswell-review-intel-
core-i74770k-i54560k-tested)

~~~
AnthonyMouse
You're comparing the performance of the respective top of the line models.
That only matters for bragging rights. Most people don't buy that one, which
leaves AMD open to sell chips to people who would have bought midrange Intel
chips -- or people who are willing to sacrifice 312.4/300.88 -> ~3.8%
performance (which is almost certainly within the margin of error) in order to
keep competition alive or because AMD offers a lower price.

~~~
e12e
Well, I just wanted to guess at what that 5GHz announcement actually would
mean in terms of performance.

I'm not convinced ~11 pixels per second is within the margin of error (the
numbers were from the single-threaded POV-Ray test) -- but a 3.8% difference
certainly means very little in the real world. I'd guess it falls within the
bracket that is measurable but ignorable ;-)

Also, the "5GHz chip" will almost certainly be AMD's top-of-the-line model?

------
TallGuyShort
IBM shipped a 5.2 GHz chip for its mainframes over 2 years ago. This is not
the first.

~~~
CountHackulus
Exactly - at best, this is the first-ever 5GHz x86 chip. The title is
factually inaccurate.

~~~
ignostic
That's not surprising, given the title comes from and links to a _press
release_.

------
drcode
Does it really qualify as a 5 GHz processor if it only runs at that speed in
Turbo mode? (which I assume can only kick in for a few milliseconds...) What
is the "normal" speed that it can maintain for more reasonable time periods?
How come this isn't mentioned in the press release?

~~~
Retric
Turbo mode is limited by heat, so with sufficient cooling you can stay in
turbo mode.

PS: Intel and Nvidia do the same thing.
[http://www.techarp.com/showarticle.aspx?artno=745&pgno=1](http://www.techarp.com/showarticle.aspx?artno=745&pgno=1)

~~~
api
If older chips from Intel and AMD could be overclocked to twice their baseline
rates with high-end cooling, I wonder what you could clock this beast up to?

And yes, there is a market for this. There are certain workloads that are
simply not parallelizable -- they're linear chains of dependencies where the
output of the first step feeds into the second and so on, so step N depends
on all N-1 steps before it.

~~~
DanBC
Here's a nice example of a challenge that is very hard to parallelize
([http://people.csail.mit.edu/rivest/lcs35-puzzle-
description....](http://people.csail.mit.edu/rivest/lcs35-puzzle-
description.txt)) ([http://crypto.stackexchange.com/questions/5831/what-is-
the-p...](http://crypto.stackexchange.com/questions/5831/what-is-the-progress-
on-the-mit-lcs35-time-capsule-crypto-puzzle))

([https://people.csail.mit.edu/rivest/pubs/RSW96.pdf](https://people.csail.mit.edu/rivest/pubs/RSW96.pdf))
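The LCS35 puzzle linked above is a nicely concrete instance of the point: it
asks for 2^(2^t) mod n, computed by t successive modular squarings, where
every squaring needs the previous result. A minimal sketch (with toy
parameters, nothing like the real puzzle's):

```python
def sequential_squarings(t, n, base=2):
    """Compute base^(2^t) mod n by t successive squarings.
    Each iteration depends on the previous one, so the loop
    cannot be split across cores; only a faster core helps."""
    x = base % n
    for _ in range(t):
        x = (x * x) % n
    return x

# Toy check against Python's built-in modular exponentiation:
assert sequential_squarings(10, 1009) == pow(2, 2 ** 10, 1009)
```

The real puzzle uses t on the order of 10^13 with a 2048-bit modulus, so the
only way to finish sooner is a faster serial core, exactly as described.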

------
rbanffy
Wasn't IBM's POWER in this range already?

~~~
profquail
Yes, according to Wikipedia, IBM was shipping 5.0GHz POWER6 processors by
2008:
[https://en.wikipedia.org/wiki/POWER6](https://en.wikipedia.org/wiki/POWER6)

The article cited in the Wikipedia entry is dated 08-Apr-2008:
[http://www.theregister.co.uk/2008/04/08/ibm_595_water/](http://www.theregister.co.uk/2008/04/08/ibm_595_water/)

~~~
apaprocki
Don't let the numbers fool you, though. POWER6 achieved those speeds due to a
change in chip architecture that actually wound up making them worse
processors than the much lower clocked POWER5s in a lot of situations. IBM
reversed course and changed the chip architecture back for the POWER7s, which
are clocked lower and outperform the POWER6s.

~~~
bretpiatt
I'm not an expert here, but it looks like IBM is back up to shipping 5.5GHz
chips in the zEC12 as of December 2012. Are these still POWER6 chips as an
option, or fixed and faster POWER7?

PDF Redbook reference:
[http://www.redbooks.ibm.com/redbooks/pdfs/sg248050.pdf](http://www.redbooks.ibm.com/redbooks/pdfs/sg248050.pdf)

~~~
apaprocki
As sibling mentions, they're not POWER. The max GHz for each rev is POWER5 at
2.3GHz, POWER6 at 5GHz, POWER7 at 4GHz.

~~~
filereaper
apaprocki, you've been absolutely spot on.

The highest-clocking POWER processor offered is the 4.42 GHz P7+ System p 780:
[http://www-03.ibm.com/systems/power/hardware/780/specs.html](http://www-03.ibm.com/systems/power/hardware/780/specs.html)

And yea, POWER 6's design was a bad decision.

------
userulluipeste
Q: What will a more powerful CPU core do in a comparatively slow environment
that is the rest of the computer?

A: Wait faster!

~~~
bcoates
A lot of that comparative slowness was caused by mechanical storage;
replacing it with parallel, lower-latency SSDs makes it a lot easier to get
full use out of more and faster cores. It doesn't cost much to set up a
system that's completely CPU-limited on OLAP database-like workloads these
days.

I suspect that as more software stops being optimized for ~10ms serial disk
I/O with huge caches, this will become more common, and more and faster cores
will be a big(ger) deal.

------
venomsnake
Can you hear it - the shrieks of all those D-14s and Phanteks screaming?

I would like to see a review though. And pricing. If it has decent single-
threaded performance, then with that number of cores and all next-gen games
being multithreaded by default, it could be a compelling processor if it is
in the 4770 price range.

~~~
zanny
Not really, because the AMD 8-physical-core layout is still 4 modules of 2
register sets sharing a front end and FPU. It is a lot like a hyperthreaded
Intel part, but Intel has much better per-clock performance.

That, and I see next-gen games favoring OpenCL / GL 4.3 compute shaders to
offload their parallel workloads rather than aggressively optimizing for
greater-than-4-core processors. Moving traditionally CPU-bound workloads
(per-agent logic, pathfinding, collision detection) to compute-class GPUs
(when available, with a CPU fallback for now) gives you significantly more
return than optimizing for the CPU.

Also, you can take a 4770K to near 5GHz on air. This part is already pushing
the thermal limits of the Bulldozer architecture; AMD is only shipping them
at this high a speed because they are floundering in the low per-clock
performance rut the entire architecture put them in.

Now, I would point any budget-oriented gamer to the 4- or 6-core AMD models
around $120 - 130, because since they are all unlocked, you can get real
performance gains (but terrible power efficiency) over the Intel parts below
the one 4-core unlocked part they put out each generation. Since they are
effectively 2- / 3-module parts, they are well suited for the next gen of
GPGPU-everything in the engine, letting the CPU do control flow.

If you even approach $200, the performance gains from jumping from any non-K
part to a 4.8GHz 4570K are huge, and that alone outclasses every AMD CPU for
gaming, though it does trade blows on some titles with the 8-core parts.

~~~
e12e
It'll be interesting to see if AMD cornering both the new Xbox and PS4 has an
effect on game engines for the PC as well -- specifically, whether the tuning
that will probably go into console versions will translate to the PC -- and
whether or not Intel (and Nvidia) will end up being penalized as a result.

------
deepblueq
The problem is that clock doesn't really mean anything concrete in terms of
real world performance. It's strictly a marketing thing.

For example, what if a chip used a 10 GHz clock for distribution, and divided
it down to 5 GHz everywhere it was actually used (not that I know of any
reason to do such a thing besides marketing)? Would it be marketable as a
10 GHz chip? The manufacturer would certainly be in hot water if enthusiasts
ever found out...

Even without such contrived scenarios, CPUs get different amounts of stuff
done per clock.

Something I keep seeing, even on Slashdot and Hacker News, is the idea that a
CPU that has to clock higher for a given performance will use more power. It
seems to me that if you've got double the clock, the likely explanation is
that half the transistors are switching per clock, and power consumption
should be orthogonal to clock/IPC ratio.

If anyone's got any contrary ideas on that, I'd love to hear them. All I can
think of is that higher clocks would correlate with longer pipelines, but
Bulldozer's pipeline isn't even that long.

~~~
VLM
"is the idea that a CPU that has to clock higher for a given performance will
use more power."

This is like a dog whistle to the EEs; they're going to get all riled up by
programmers with screwdrivers. You can model a stereotypical FET gate as a
capacitor: all you're really doing is charging and discharging capacitors,
either in FET gates or the transmission line's theoretical capacitance. Right
out of the C=Q/V definition of what capacitance is, mushed up against some
Ohm's law and some algebra, you end up with P = C times V squared times F. So
you can see the intense excitement in lowering core voltages and making gates
and lines smaller (lowering C), all in a tradeoff to improve the P/F or F/P
(whatever) ratio.

The important part is it's pretty easy: right outta Ohm's law and the def of
what capacitance is, power is directly proportional to frequency.
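The P = C * V^2 * F relation is easy to play with numerically. A sketch with
made-up illustrative numbers (not any real chip's):

```python
def dynamic_power(c_eff, v, f):
    """Dynamic (switching) power of CMOS logic: P = C * V^2 * f,
    with c_eff the effective switched capacitance per cycle."""
    return c_eff * v * v * f

p_4ghz = dynamic_power(1e-9, 1.2, 4.0e9)  # hypothetical chip at 4 GHz
p_5ghz = dynamic_power(1e-9, 1.2, 5.0e9)  # same chip pushed to 5 GHz

# At fixed voltage, power is directly proportional to frequency:
assert abs(p_5ghz / p_4ghz - 5.0 / 4.0) < 1e-12
```

In practice pushing the frequency that high also requires raising the
voltage, which is where the squared term really starts to hurt.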

~~~
tbrownaw
_The important part is it's pretty easy: right outta Ohm's law and the def of
what capacitance is, power is directly proportional to frequency._

There's also the fact that your transistors have a particular voltage that
they switch state at, which means that they switch faster if you drive the
gate/line capacitance with a higher voltage.

Which means that chips _designed_ for lower frequencies can be designed to use
lower voltages, which can save far more power than what would be directly
proportional to the lower frequency.
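This point falls straight out of the same P = C * V^2 * f formula: halving
frequency alone halves power, but a design that can also run at a lower
voltage gains much more from the squared term. The voltage/frequency pairs
below are made-up illustrative values, not real silicon data:

```python
def dyn_power(c_eff, v, f):
    # P = C * V^2 * f (dynamic switching power)
    return c_eff * v * v * f

full    = dyn_power(1e-9, 1.3, 4.0e9)  # fast design at high voltage
f_only  = dyn_power(1e-9, 1.3, 2.0e9)  # halve frequency only
f_and_v = dyn_power(1e-9, 0.9, 2.0e9)  # halve frequency AND drop voltage

print(f_only / full)    # 0.5: linear in frequency
print(f_and_v / full)   # ~0.24: the V^2 term does the rest
```

So the frequency cut saves half the power, while cutting voltage too saves
roughly three quarters of it.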

~~~
VLM
"which can save far more power"

yes, right out of the equation provided.

In "CS" terms that may be better understood on HN than "EE" terms, electrical
power scales O(n squared) with voltage and O(n) with frequency.

If you really wanna get people riled up and talking you can roll out the old
power "EE" stuff about maximum power transfer happening when source and sink
impedance are the same, and you want to get the most bang for your buck so
you'd like that, right, and a transistor gate being near infinite resistance
would imply ... Or if you like to think about interconnects being signal to
noise level limited, then an RF analysis about noise voltage across a resistor
vs preamp noise figure vs current bias from a communications standpoint would
imply... But it turns out in practice most of the time, the first mental model
is by far the most effective way to look at it compared to these.

------
tibbon
I remember ~10 years ago on Slashdot some people overclocking to 7-8GHz. Of
course this was on single-core chips, but we've really pretty much completely
stalled on the MHz progression, haven't we?

~~~
jmngomes
Also, I think there are some physical limitations that keep chips below a
certain clock speed. Besides, the bet has been on "smarter instead of faster",
i.e. producing chips that suit our computing needs, which are more adequately
supported by parallel processing.

High-performance cores are useful for problems that are hard to parallelize,
but so far it seems that the breakthrough only occurs when a new approach to
the problem makes it feasible on multiprocessing platforms (e.g. graph
processing is hard to parallelize due to dependencies among graph nodes;
Pregel and similar offer a different approach)... a 50GHz CPU won't save you
if you need to process a huge graph (i.e. billions of nodes) on a single
thread; it'll always take a lot of time.

As to the "record", I think IBM already had a System z that runs over 5GHz.

~~~
sliverstorm
_I think there are some physical limitations that keep chips below a certain
clock speed._

Not _hard_ limits, but yes, to my knowledge it is primarily physics that keep
chips where they are. The requirements for power and heat dissipation start to
balloon.

------
louthy
> "FX-9590: Eight “Piledriver” cores, 5 GHz Max Turbo "

Ridiculous name. Maybe if they put a 'Go faster stripe' on the top of the chip
people will believe it goes even faster!

~~~
SteveTickle
"FX", "PileDriver", "Max", "Turbo"??? What's next? Obviously, "Super",
"Mega", "Ultra", "Extra"?

------
Stolpe
In other news: PCs now even more versatile. Now also replace radiators.

~~~
venomsnake
It has been like that for a while... in 2000-ish we were calling the Athlon
"Kotlon" (kotlon is Bulgarian for stove).

~~~
freehunter
In the mid 2000s I was running a Prescott oven in my room.

~~~
sliverstorm
Just wrap that computer case in tin foil, and you got a makeshift oven!

------
kayoone
I wonder how two of these (16 cores) would compare against the new Mac Pro
with the best CPU in software that benefits from many cores, like 3D
rendering or virtualization. I'd bet they are close, while the AMD only costs
a fraction of the Xeon. I know it's not a fair comparison since the FX-9000
is not a workstation CPU, but still...

~~~
bluedino
The multi-threaded POV-Ray and Cinebench tests are just about the only two
benchmarks where the AMD 8-cores beat the i7 2600K, and they just barely beat
it.

The Intel chips soundly win at everything else (encoding, Photoshop...), and
by almost 2X in some of the single-threaded tests.

------
Everlag
Yes AMD, we all know you like your big numbers, like core counts and clock
speed. It'd, however, be just excellent if you could put out a product whose
single-threaded performance isn't garbage. I mean, Thubans are beating your
newest and greatest!

But at least you can say you've got a bigger cache, clock speed, core count,
and debt than Intel.

------
yason
In the end it's not the frequencies nor number of cores but performance per
watt that matters.

Most computers run on batteries these days, and those that don't drain ever
more expensive electricity from the wall socket and at the same time waste a
lot of it producing huge amounts of heat.

The more you get out of a watt the better. You can either trade in speed for
lower power or trade in power for better performance, but in either case you
want the performance/watt ratio to be the highest.

I would guess the power consumption of running the chip at 5GHz is pretty
high. And running temperatures as well. And yet there are fewer and fewer of
those huge tasks that you can only do with one core.

~~~
sliverstorm
_In the end it's not the frequencies nor number of cores but performance per
watt that matters._

It depends on the workload, really. It should already be obvious that this
part is not meant to be a Joe Everyman processor.

------
scotty79
Since the Pentium 133 I've never had an Intel processor in a desktop
computer. I wanted one a few times, but AMD was always cheaper for the same
speed. Sure, the fastest were almost always the Intel ones, but the
additional bit of speed never justified the price.

~~~
csense
My hyperthreaded [1] Core i7 makes kernel compile jobs fly.

[1]
[http://en.wikipedia.org/wiki/Hyperthreading](http://en.wikipedia.org/wiki/Hyperthreading)

~~~
scotty79
It just splits a core into sort of two virtual cores. I'm not sure how that
could help. Have you checked how disabling it influences compiling speed?

~~~
csense
> I'm not sure how that could help

Hyperthreading keeps two threads "hot" in each physical core. When one thread
is waiting on memory access, the core can do work on the other thread rather
than sitting idle. (Memory access isn't _that_ slow, so switches need to be
fast to capture those otherwise-wasted cycles, which is why this is a CPU
hardware feature rather than an OS-level software feature.)

Purely CPU-bound tasks [1] don't get any performance gains from HT. But almost
all real-world applications spend a lot of time reading and writing memory,
and memory access is pretty slow compared to CPU speeds, so in practice HT
helps (otherwise Intel wouldn't have bothered to develop it and put it on
their chips, which probably cost a lot of money).

> Have you checked how disabling it influences compiling speed?

No. But I'd guess it would be substantially less than 100% speedup since they
aren't actual, physical cores; but substantially more than 0% speedup since
the compiler uses dozens or even low hundreds of megabytes of memory.

[1] By "CPU-bound" I mean register-to-register arithmetic. You might also be
able to get away with hitting the L1 cache, which is a few KB, without
triggering an HT context switch.

------
shawnz
How ironic that AMD are now suffering from the same thing that once gave them
the edge (that is, Pentium 4's overly aggressive clock speed roadmap and
lacklustre per-clock efficiency).

------
leeoniya
And here I was hoping never to see a high-clock-speed headline again...
(supercomputers excluded)

~~~
fchief
I guess their marketing was out of other ideas and just went back to the well
one more time. It has probably been 10 years now since I really considered CPU
clock speed as a factor when buying a computer.

------
IanChiles
I seem to recall reading that these processors would have a 220W TDP - which
makes the whole 5GHz thing much, much less impressive...

~~~
Moto7451
I've yet to find an actual source for that figure whenever it comes up. Is it
just some tech site comment section spitballing, or did they actually
disclose the TDP?

~~~
VLM
If there's a dimensioned pic of the heatsink, a bored enough engineer could
calculate the theoretical degC/W rating of the heatsink, and given a
presumably constant deltaT, there's your wattage.

It doesn't have to be dimensioned that accurately. To a first approximation,
a 1% error in surface area would be about a 1% error in TDP.
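The estimate is just deltaT divided by the heatsink's thermal resistance. A
sketch with made-up numbers (the 0.25 degC/W and 55 degC figures are
assumptions for illustration, not measurements of any real cooler):

```python
def estimated_tdp(theta_c_per_w, delta_t_c):
    """Back-of-the-envelope TDP from a heatsink's thermal resistance
    theta (degC per watt) and an assumed steady-state temperature
    rise over ambient: P = deltaT / theta."""
    return delta_t_c / theta_c_per_w

# e.g. a 0.25 degC/W heatsink holding a 55 degC rise over ambient:
print(estimated_tdp(0.25, 55.0))  # 220.0 W
```

The big unknown is the assumed deltaT, so this only gets you a ballpark, not
a spec-sheet number.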

I'd like to see very high temp CPU technology. That would be an interesting,
challenging direction for hardware tech to move. A tiny lightweight 5 deg C/W
heatsink is plenty if you're allowed to run at, say, vacuum tube redhot glow
temperatures. I'm well aware of the solid state physics challenges of this,
that's why I think it would be very interesting to see if anyone could pull it
off.

~~~
marshray
I doubt that a heatsink on an engineering-sample test board would be sized
within a 1% margin. It seems more likely they'd err generously on the side of
big.

------
nvmc
I've been waiting since my Athlon64 died for AMD to make a chip worth getting.
Them taking the brute force (P4) approach is not all that encouraging.

------
znowi
I think maybe AMD should move to niche markets and stop trying to compete
with Intel, given how vast the gap is in technology and resources.

~~~
freehunter
If there's no one trying to compete with Intel, I would guess that gap would
close fairly quickly.

------
sigzero
I think there should be a "their" in there, somewhere.

------
Ziomislaw
Yaay, I am eagerly waiting for Intel to catch up :) (so I can buy quality
stuff that does not hang or overheat)

------
Pherdnut
YAYYY!!! Okaynowwhydothebenchmarkssuck.

------
ck2
I wonder how far they will OC on air.

------
josephagoss
What's the MIPS for this CPU?

------
ogdoad
finally, angry birds will be totally responsive!

------
zmonkeyz
(for consumers)

