
Intel Core i9-7980XE and Core i9-7960X CPU Review Part 1: Workstation - pulse7
https://www.anandtech.com/show/11839/intel-core-i9-7980xe-and-core-i9-7960x-review
======
sverige
I'm glad AMD is competitive again. Yes, Intel just surpassed them, but at
twice the price. Plus, I suspect that competition pushes Intel into releasing
better chips quicker.

I'm not really part of the "prosumer" market (what a word!), since I don't
have a burning need for that much power, but it helps move prices lower in the
middle and bottom of the performance range when the top end improves.

~~~
kpil
The last time AMD was competitive, they delivered really good value for money.
Yet they were only available in the "enthusiast" market segment, as Intel were
able to give all OEMs strong incentives not to buy AMD.

Is there any chance they don't have all OEMs by the balls this time too?

~~~
jhasse
> Intel were able to give all OEMs strong incentives not to buy AMD.

They were bribing and blackmailing OEMs.

------
dis-sys
Intel must be pretty desperate, according to the review article:

* Intel charges double the price of a Threadripper 1950X to give you an 8% increase in multithreaded performance (measured by Cinebench R15 scores).

* When Intel couldn't match the 60 PCIe lanes of AMD's high-end CPUs, they invented the term "platform PCIe lanes" to include those from the chipset.

* When Intel couldn't match AMD's TDP, they came up with a different TDP definition; as a result, the 165 W TDP Intel 7980XE draws 15 W more than the 180 W Threadripper 1950X.

Intel must have a pretty good propaganda department known as "marketing"?

For those who believe Intel is responding to AMD's challenge: no, it is not.
According to leaks, Intel is about to release the i3-7360X, which is going to be
1.25% faster (yeah, 1.25% faster, no typo, source linked) than the previous
generation. When you can fully utilize your well-established marketing
department to fool consumers into buying your overpriced slow processor with far
fewer cores and PCIe lanes, why bother giving those customers anything better?

[https://videocardz.com/72775/intel-preps-dual-core-i3-7360x-for-x299-but-why](https://videocardz.com/72775/intel-preps-dual-core-i3-7360x-for-x299-but-why)

* Why did Intel release the Q6600 10 years ago and make quad core mainstream? Because AMD was competitive.

* Why did Intel repeatedly refuse to make 8 cores mainstream over the last 10 years? Because AMD was not competitive.

* Why did Intel finally make 6 cores mainstream, 10 years after the release of the Q6600? Because AMD is competitive again.

~~~
onli
That's not too accurate:

> _when Intel couldn't match AMD's TDP, they came up with some different TDP
> definition, as a result the 165w TDP Intel 7980xe draws 15w more than a 180w
> threadripper 1950x._

So far it was AMD that had a weird way of calculating TDP, and it was AMD
that went over it, while with Intel TDP pretty much equaled max power draw. That
Intel now goes over is clearly a sign that they are pushing the boundaries of
their architecture, but it's not really a big distinction from AMD.

~~~
ianhowson
Intel's TDP measures power consumption/heat production _in the intended
application_. The same silicon has a TDP of 5W, 15W or 35W depending on
whether it's going in an ultralight laptop, pro laptop or desktop.

All modern Intel CPUs will draw more than their rated TDP if you run them
hard.

~~~
Coding_Cat
And these are high end 'prosumer' parts. Running the CPU at peak 80-100% load
for a significant amount of time should be considered the intended
application.

~~~
ianhowson
What does 80-100% even _mean_, though? The CPU cores are merely one component
of the CPU 'package'. The GPU on most Intel parts consumes way more power than
the CPU cores when running flat-out.

Even just within the CPU cores, different ops consume vastly different amounts
of energy while still 'maxing out' utilisation. And we haven't even considered
accelerator hardware within the CPU, which is totally independent of the CPU
cores but can still consume a large amount of energy (e.g. H.264, AES).

Intel no doubt calculates a maximum power consumption figure where every bit of
hardware on the CPU is consuming as much as it can. It'd be utterly
meaningless -- probably on the order of 100W for a 2-4 core laptop part or
400W for a server part.

You _can_ design a computer to handle those figures, but you don't want to. So
TDP reflects intended usage rather than peak demand.

If you're a 'prosumer', you're presumably smart enough to size your power
supply and cooling appropriately. TDP is not the number you're looking for.

~~~
Coding_Cat
Ah yes, fair point, my personal bias is showing a bit. I would consider
numerics (AVX-512 or AVX2) to be the main interest, and it is often what the
kind of workstations I'm familiar with need. It also tends to consume the most
power at full load, by a significant margin.

The 80-100% is a bit of a vague number; I was just referring to the CPU
utilization as reported by the OS. For numerics I would consider a realistic
full load to be 75%+ pipeline fill time on the AVX code paths. With a lot of
compute you can get quite close to 100% quite quickly if your software is highly
optimized (and uses enough compute to hide the memory loads). TDP should just
be 100% pipelined load on the highest-consuming code path.

But my general point is still that these benchmarks should not exceed the TDP
for these types of parts. For everyday use these loads might not be realistic.
But for a lot of people looking to spend $2000(!) on a single CPU, these
benchmarks are indicative of real life usage.

------
smacktoward
Intel's staunch refusal to support ECC RAM on non-Xeon chips seems incredibly
chintzy at this price level.

Who is the customer that's simultaneously willing to spend $2,000 on a CPU and
so desperate to save a few bucks that they'd settle for non-ECC RAM? I want to
meet that person.

~~~
RubberSoul
Is there any research, or even anecdotes from people running lots of machines,
that indicates how frequently corruption occurs?

Suppose I'm doing scientific computing: is ECC solving a super rare problem,
or something that might happen weekly or monthly?

~~~
ljcn
Some research yes. To answer your final question directly - more like hourly.

Google did a big study in 2009 [0] and found error rates between 2.4E-11 and
6.7E-11 errors/bit-hour, or, in more familiar units: about 1 error per 2 GB
of memory per hour at the upper end of that range.

Obviously most errors go unnoticed, but for anything where random bit flips
are unacceptable, ECC is necessary.

Note too that error rates go up with memory density, not down, so things might
be worse with more recent chips.

[0]
[http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf](http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf)
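
Those rates are easy to sanity-check. A minimal back-of-the-envelope sketch (the two constants are the bounds quoted above; `errors_per_hour` is just an illustrative helper, not anything from the study):

```python
# Bounds quoted from the 2009 Google DRAM study: errors per bit-hour.
RATE_LOW = 2.4e-11
RATE_HIGH = 6.7e-11

BITS_PER_GB = 8 * 2**30  # 1 GB = 2^30 bytes = 8 * 2^30 bits


def errors_per_hour(gigabytes, rate_per_bit_hour):
    """Expected memory errors per hour for a given amount of DRAM."""
    return gigabytes * BITS_PER_GB * rate_per_bit_hour


# 2 GB of DRAM: ~0.41 to ~1.15 expected errors/hour,
# i.e. roughly "1 error per 2 GB per hour" at the upper bound.
print(f"2 GB:  {errors_per_hour(2, RATE_LOW):.2f} to "
      f"{errors_per_hour(2, RATE_HIGH):.2f} errors/hour")

# A 16 GB machine at the upper bound: ~9 expected errors/hour.
print(f"16 GB: up to {errors_per_hour(16, RATE_HIGH):.1f} errors/hour")
```

These are fleet-wide averages; the study also found errors are highly concentrated on a minority of bad DIMMs, so a given healthy machine may see far fewer.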

~~~
api
> about 1 error per 2-GByte of memory per hour

Wow! So that means a 16GB laptop will on average experience 8 memory errors
per hour under moderate load? I had no idea it was that high.

This likely explains why my Mac crashes randomly but incredibly rarely. It's
like once every few months. I guess a bit flip happens at the wrong place
about that often?

~~~
taneliv
I would say _that_ bit flipped in a much less wrong place than one that ends
up corrupting some file on disk, unless you have a lot of insignificant data
stored.

------
nerdy
Unfortunately the 7980XE does not appear to be available for purchase yet :(

Intel's site says Q3 (which ends this month):
[https://ark.intel.com/products/126699/Intel-Core-i9-7980XE-X-series-Processor-24_75M-Cache-up-to-4_20-GHz](https://ark.intel.com/products/126699/Intel-Core-i9-7980XE-X-series-Processor-24_75M-Cache-up-to-4_20-GHz)

And in a blog post they appeared to indicate the 7980XE would be available
Sept 25 (at _least_ the 14-core version, which also hasn't yet appeared,
depending on parsing):
[https://newsroom.intel.com/news/intel-unveils-full-intel-core-x-series-processor-family-specs-14-18-core-processors-available-starting-september/](https://newsroom.intel.com/news/intel-unveils-full-intel-core-x-series-processor-family-specs-14-18-core-processors-available-starting-september/)

~~~
CyberDildonics
There seems to be giant lag between the press releases of these chips and when
someone can actually buy them.

~~~
bryanlarsen
This press release wasn't supposed to go live for a while, but somebody broke
the embargo, forcing Intel's hand.

So the availability lag might be worse than normal.

------
GiorgioG
I recently put together a Threadripper PC. Overclocked to 4.0 GHz (stable)
using a Noctua air cooler, my machine beats both these processors in the
multi-threaded Cinebench R15 benchmark (3288 cb vs. 3281 cb and 3083 cb).
Single-core is a bit slower (161 cb vs. 189 cb).

This is my first AMD system in ages, and I can't say I'm disappointed:
performance is great, and the value is better than Intel's offerings.

CineBench / UserBenchmark screenshots:
[https://ibb.co/mapRUQ](https://ibb.co/mapRUQ)

