
Intel Core i9-7900X review: The fastest chip in the world - hvo
https://arstechnica.com/gadgets/2017/07/intel-core-i9-fastest-chip-but-too-darn-expensive/
======
tracker1
Another issue, beyond the X299 boards being problematic in terms of how many
and which slots you can and can't use, seems to be excessive heat and
instability in some of the reviews I've seen.

I'm still running an i7-4790K at home, and though I'd like something with more
cores... nothing is compelling enough to bring me to switch given the costs
involved. If I were building new, I would most likely go with an AMD solution.

~~~
NamTaf
CPU gains for the last few gens have mainly been in perf/watt and, now, core
count in lower-class chips thanks to AMD. The biggest benefit of a system
upgrade for the last few gens, from a user perspective, has been for the
chipset's gains - NVMe, USB3, USB-C, etc. more than raw performance. That's
slowly trickled up but the real-world gains in many tasks haven't been enough
to really justify it.

I know many people still sitting on anywhere from a 2xxx gen to a 5xxx gen who
just don't feel compelled to upgrade from a CPU perspective. Those that
eventually do, do so for the motherboard features more than the CPU - that's
just a necessary cost for a small benefit.

This is marginally different for laptops, where perf/watt becomes more
important, of course. However, for desktops I certainly wouldn't be troubled
by a 4xxx gen. I upgraded last year, and it was from a 920 to a 6600K. Even
the 920 did much of what I wanted, honestly; it was more a luxury upgrade.

~~~
chx
> NVMe

While NVMe benchmarks are stunning I really am curious whether in an ordinary
setting the difference to SATA can be felt at all. Not measured -- felt. When
the era of swap ended because CPUs / chipsets finally could handle enough RAM,
and when we went from HDD to SSD, those could be felt for sure.

> This is marginally different for laptops

In the laptop world, the problem is that with the spread of the thin craze
most laptops are now running 15W CPUs instead of 35W like in the old days, so
there is little performance increase and no core count increase.

~~~
TheAceOfHearts
I switched from a 2012 15'' MacBook Pro to a 2017 15'' MacBook Pro and the
difference is staggering. The older laptop had a regular SSD, while the newer
one is using NVMe. It also has a considerably newer CPU, though, which likely
plays a huge role in the performance difference.

I sometimes tweak stuff on both laptops at the same time, and doing a side-by-
side restart or even waking up from sleep makes the older model's lesser
performance painfully obvious.

~~~
ido
Additionally, the "thin craze" matters - I remember having an old 15" laptop
in the mid/late '00s that would actually give me backache after carrying it in
my backpack for a while.

Today I can't even tell if my 13" Dell XPS is in it or not.

~~~
majewsky
Interestingly, I cannot really tell if the notebook is in the backpack, but a
water bottle with about the same total weight can totally be felt. Probably
it's the very beneficial weight distribution of a flat slab that's affixed in
an upright position directly against my back, vs. a cylindrical bottle that's
either moving around in the backpack, or stuck to its side and therefore
exerting a sideways torque on my back.

------
cyphar
... until ThreadRipper.

I'm not a fan of either megacorp (though I prefer Intel because their stuff
works much better historically on GNU/Linux), but it should be painfully
obvious that "i9" is just a reaction to the threat of ThreadRipper from AMD
(which is still going to be more powerful and _far_ more affordable than i9
when it launches).

EDIT: To be fair, they do mention this in TFA:

> That these chips are currently little more than a product name and a price
> [...] is a strong indication that Intel was taken aback by AMD's
> Threadripper, a 16-core chip due for release this summer.

~~~
throwaway209402
> it should be painfully obvious that "i9" is just a reaction to the threat of
> ThreadRipper from AMD

This is incorrect. As Ryan Shrout from PcPer notes,[1]

> In some circles of the Internet, the Core i9 release and the parts that were
> announced last month from Intel seem as obvious a reaction to AMD’s Ryzen
> processor and Threadripper as could be shown. In truth, it’s hard to see the
> likes of the Core i9-7900X as reactionary in its current state; Intel has
> clearly been planning the Skylake-X release for many months. What Ryzen did
> for the consumer market was bring higher core-count processors to
> prevalence, and the HEDT line from Intel has very little overlap in that
> regard. Threadripper having just been announced in the last 60 days (even
> when you take into account the rumors that have circulated), seems unable to
> have been the progenitor of the Core i9 line, at least not in its entirety. That being
> said, it is absolutely true that Intel has reacted to the Ryzen and
> Threadripper lines with pricing and timing adjustments.

[1] : [https://www.pcper.com/reviews/Processors/Intel-
Core-i9-7900X...](https://www.pcper.com/reviews/Processors/Intel-
Core-i9-7900X-10-core-Skylake-X-Processor-Review/Power-Perf-Dollar-Conclusi)

~~~
cyphar
I was not referring to the R&D, rather to the way it was announced and is
being marketed. I would be surprised if they didn't rush development and go
straight to marketing as a reaction to AMD. Intel have been resting on their
laurels due to the lack of serious competition; ARM is struggling, from what
I've seen.

I probably should've been clearer, but I assumed it was obvious you can't do
R&D on new silicon in 60 days and already have an announcement for it.

~~~
brianwawok
> I would be surprised if they didn't rush development

I thought the development cycle for a new CPU was 2-3 years (hence the two
teams and the tick-tock thing).

It's like adding new features late in the game for a webapp. Some things would
be very easy (we need a new page for X). Some things would be very hard (I
want you to rewrite the entire app from angular to react). My guess is
something like changing core count is very very hard to do late in the game.
Changing price, maybe changing overclock settings, those would be fairly easy.

So I suspect that if Intel didn't see Threadripper coming (which I doubt), the
thread counts were set years ago. However, Threadripper is making Intel drop
the prices a bit.

~~~
paulmd
Intel already makes these high-core-count CPUs for the Xeon market. What's
different here is their presence in the HEDT lineup - but that is basically
just blowing different feature-fuses in the chip and slapping them in a
different box. There is no actual R&D required.

Intel is afraid of these HCC chips cannibalizing their sales of more expensive
Xeons, so they're holding back as long as possible and crippling key features.
If you want ECC, for example, you have to buy a Xeon - or more realistically
for many people, a Threadripper.

------
jitl
The HN title feels like an editorial by admission. The full title is

 _> Intel Core i9-7900X review: The fastest chip in the world, but too darn
expensive_

 _> When eight-core Ryzen costs £300, do any of these new Intel chips make
sense?_

~~~
kough
s/admission/omission ?

------
msimpson
Between the Intel Core i9-7900X and AMD Ryzen 7 1800X there is a $540 price
difference.

But for that extra cost you get four more threads at a higher clock rate,
twenty extra PCIe lanes, a 500 MHz higher turbo clock speed, and double the
memory bandwidth.

Even with the lesser Intel Core i7-7820X you will get the same thread count
but at a higher clock rate, four extra PCIe lanes, still a 500 MHz higher
turbo clock speed, and double the memory bandwidth for only $140 more.

Now, of course, the AMD Ryzen Threadripper 1950X comes much closer to the
i9-7900X price point.

However, you will sacrifice single-core performance to gain twelve more
threads at a lower clock rate. But you will receive twenty more PCIe lanes,
over twice as much L3 cache, and the same memory bandwidth as the i9-7900X.

So if your plan is to build a 3D render farm, the Threadripper seems quite
appropriate.

Although, if you plan to build a workstation on which to model 3D assets and
to perform preview renders, the Intel i9 series seems more apt.

~~~
brianwawok
Are there really that many people doing 3d model rendering?

My guess would be that PC users by count are:

gamers > programmers > 3d renderers

For most gamers, it seems like the i7 or maybe the i9 wins in current
benchmarks. For programmers maybe Ryzen is a better fit, but I bet it depends
on your language.

~~~
snaily
You're forgetting the CGI industry, as seen in movies.

~~~
brianwawok
I am not forgetting it.

Google right now tells me there are:

155 million "gamers" in the US

3.6 million programmers in the US

I can't imagine the CGI industry is bigger than either of those numbers.

Now they may very well pay for the bleeding edge, and spend more dollars on
hardware than programmers do. But I am at least 95% confident there are fewer
people in the US running a 3d program on their desktop than running
eclipse/visual studio/atom/vim.

------
paulmd
The real problems with Skylake-X are chipset cost, power consumption, shitty
partner boards, and TIM. All of these are forgivable given the performance -
except the TIM.

Chipset cost will come down 6-12 months after launch like it always does.
This is par for the course - at launch, X370 boards for Ryzen were going for
well over $250 as well.

Power consumption is a consequence of AVX512 and the mesh interconnect along
with raw core count. Everyone wants higher clocks, more cores, and more
functional units. There are no easy efficiency gains anymore, and this is the
price - power consumption. This is the "everything and the kitchen sink"
processor and it runs hot as a result - but it absolutely crushes everything
else on the market. This is no Bulldozer.

Board partners putting insulators on top of their VRMs was going to come to a
head sooner or later. This is the natural outgrowth of form over function:
RGB LEDs on everything and stylized heatsink designs that insulate the board
instead of actually cooling it. The terrible reviews on those boards will sort
this problem right out - they are unusable in their current form.

Intel has been cruising for issues with their TIM for years (since Ivy
Bridge); this time they finally have a chip that puts out enough heat that
they can't ignore it. Intel can get away with making you delid a $200 i5 or a
$300 i7, but it's not acceptable on a $1000 processor.

There is still a market for a 6-12C HEDT chip that can hit 5 GHz overclocked.
This thing absolutely smokes Ryzen in gaming at stock clocks let alone OC'd -
single-thread performance is still a dominant factor in good gaming
performance and this chip delivers in spades. Combining its leads in IPC and
clocks, it's fully 33% faster than Ryzen in single-thread performance. This is
just a brutal amount of performance for gaming. Unfortunately without
delidding you're not going to hit good OC clocks given the current TIM. And
delidding is a dealbreaker on a $1000 CPU.
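As a back-of-the-envelope check on that 33% figure: single-thread leads in IPC and in clock multiply together. The specific percentages below are illustrative assumptions on my part, not measured numbers.

```python
# Sketch: single-thread advantage compounds multiplicatively from IPC
# and clock leads. Both inputs are assumed, illustrative values.
ipc_lead = 1.10    # assume ~10% IPC advantage for Skylake-X over Zen
clock_lead = 1.21  # assume ~21% clock advantage (e.g. OC'd vs. Ryzen OC)

speedup = ipc_lead * clock_lead
print(f"combined single-thread lead: {(speedup - 1) * 100:.0f}%")  # prints 33%
```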

TIM is the actual core problem with Skylake-X - everything else will sort
itself out. Skylake-X with solder would be a winner and Intel would be wise to
turn the ship as fast as possible. The 6C and 8C version are priced much more
reasonably and will sell great as long as they fix the TIM problem.

Intel claims they have problems with dies cracking, but AMD manages to solder
much smaller dies, so IMO Intel just doesn't have a leg to stand on here. This
is not something that should be pushed onto the customer with a $1000
processor - you're Chipzilla, _figure something out_.

~~~
vosper
At the desktop level I don't get why people care that much about power
consumption. It means you have to dissipate more heat, okay, so you can't use
a cheap cooler. But AFAIK even an extra 100W is cheap even in the areas with
the most expensive electricity, especially when contrasted against
productivity, or cigarette breaks, or people sometimes being 20 minutes late
to work...

~~~
paulmd
I fully agree, and what's more Intel has the performance to back it up. This
chip pulls a lot but it's wicked fast, it's a massive step forward in
framerates. It combines the minimum-framerate improvements of HEDT/Ryzen with
the single-threaded performance of Kaby Lake. Oh yeah and AVX512 too.

For a sense of perspective here, going from a circa-2012 2600K to a current
7700K is a 40% jump in performance, so it's roughly equivalent to 4-5 years of
gains at Intel's usual tempo - only you also have 10 cores on this platform.
This thing is an absolute monster for gaming or other tasks that lean heavily
on single-threaded performance.
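To see how "40% over 4-5 years" maps to a per-generation tempo (a sketch; the 40% figure is from the comparison above, the yearly rate is just derived from it):

```python
# If a 2600K -> 7700K jump is ~40% over ~5 generations, the implied
# per-generation gain r satisfies (1 + r)**5 = 1.40.
total_gain = 1.40
generations = 5

per_gen = total_gain ** (1 / generations) - 1
print(f"implied gain per generation: {per_gen * 100:.1f}%")  # prints 7.0%
```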

But the power consumption is really the triggering issue for the problems with
shitty partner-boards overheating and the TIM. The TIM is really the
showstopper right now.

~~~
ChoGGi
It's more like 20-25% (depending on what you use it for):

[https://www.hardocp.com/article/2017/01/13/kaby_lake_7700k_v...](https://www.hardocp.com/article/2017/01/13/kaby_lake_7700k_vs_sandy_bridge_2600k_ipc_review)

------
sengork
What I really wonder is when this chip would have been released if AMD hadn't
come out with the Ryzen lineup in 2017.

~~~
arcanus
I'll answer your subtle comment quite explicitly: never.

Intel clearly rushed this out to prevent AMD from having the perception of
leading the space, at least in terms of the largest core count. The article
even references this.

I just hope that it stays competitive, as this is clearly a win for consumers.

~~~
paulmd
Skylake-X certainly would have existed regardless of Ryzen; Intel wants to
sell these chips to Google/Amazon/Facebook as Xeons. The idea that this uarch
is a reaction to AMD is absurd - it has been on the roadmap for years and at
most was pulled forward a few months.

People will attribute anything one of AMD's competitors does to fear of AMD -
780 Ti, 980 Ti, 1080 Ti, Skylake-X, you name it. It's frankly a little comical
given the actual amount of competition AMD put up with Bulldozer and
Fiji/Polaris/Vega against Haswell/Skylake and Maxwell/Pascal - which is to
say, hardly any. NVIDIA and Intel both have their own yield strategies and
release schedules that are largely independent of what AMD does. Intel and
NVIDIA are tweaking prices and specific launch dates - that's about the extent
of AMD's impact on their competitors so far.

(although Intel is definitely paying attention to Threadripper/Epyc now, and
will be pushing core count up on consumer chipsets starting with Coffee Lake
and presumably Cannonlake as well)

The real problems with Skylake-X are chipset cost, power consumption, shitty
partner boards with insulators on the VRMs, and TIM. None of those have
anything to do with a "rushed launch", all of those are issues that have been
slow-boiling for years now.

~~~
Sephr
> Intel wants to sell these chips to Google/Amazon/Facebook as Xeons

They have already been selling vastly superior Skylake-SP (and -EP) Xeons to
Google/Amazon/Facebook since 2015.

------
sliken
I'm all for double the memory bandwidth (and more importantly double the
memory channels) as long as it's not too expensive. But I'm holding off on
buying a new desktop till the AMD Threadripper hits in a few weeks.

I suspect that benchmarks do a really poor job of measuring worst-case
performance... which is what users notice: things like UI lag and audio
skipping. I suspect memory bandwidth (assuming a nice fast M.2 SSD) is the
limiting factor for heavy workloads made up of independent tasks.

~~~
dom0
UI lag and stuttering audio is still mostly caused by bad IO scheduling.

------
rbanffy
> fastest chip in the world

Impressive as this is for an x86 chip, some POWER and SPARC users may disagree
with this assessment. In fact, some Xeon users will doubtless scratch their
heads too.

~~~
nl
Only reason they'd scratch their head is to try to find the hair they lost
explaining why they are on POWER or (especially) SPARC.

Power9 is competitive in performance/Watt, and in some weird benchmarks which
no one cares about. I haven't seen anything competitive in any way from SPARC
for a long time.

~~~
rbanffy
The article claims "fastest", not "most efficient" or "best per dollar". It
all depends on the workload - I remember SPARC has some interesting tricks
integrated into its silicon that make things like HANA and Oracle a good deal
faster.

------
pella
Anandtech: [http://www.anandtech.com/show/11550/the-intel-skylakex-
revie...](http://www.anandtech.com/show/11550/the-intel-skylakex-review-
core-i9-7900x-i7-7820x-and-i7-7800x-tested)

------
turblety
But once again they have included a second "processor" on each chip with a
bunch of restricted backdoors that can not be removed [1]. There have already
been bugs [2] and exploits [3] found and therefore no Intel or AMD chip can be
used if you care about security and/or privacy.

If I think Microsoft isn't free enough for me, then I can remove Windows and
install Linux (or BSD). If I think Chrome is sending my data to Google, then I
can remove it and install Firefox.

But if I don't like that Intel can take over my PC at any time, watch my
screen, log my keystrokes, prevent me from installing another operating
system, manipulate what I see on the screen, and much much more, then there is
nothing I can do. I cannot remove the second chip or remove the code. I must
have a proprietary blob [4] (whose source code no one can see to audit)
running on my Intel PC.

But the worst thing has to be Intel and AMD's complete refusal to provide a
clean chip to companies that are trying to provide backdoor-free computers.
Look at [http://puri.sm](http://puri.sm) [5] for example. They are trying to
provide a PC that does not restrict what operating system or BIOS you run, and
have repeatedly contacted Intel to ask them to provide a batch of chips with
no ME or AMT installed. Even Google, which sells millions of chromebooks
(coreboot preinstalled), has been unable to persuade them. [6]

As Intel and AMD are the biggest players in, and arguably a duopoly of, the
microprocessor market, they have a responsibility to provide safe and clean
processors that customers can truly own. Please try your best not to buy these
products until they resolve these issues.

[1]
[https://libreboot.org/faq.html#intelme](https://libreboot.org/faq.html#intelme)

[2]
[https://www.theregister.co.uk/2017/05/01/intel_amt_me_vulner...](https://www.theregister.co.uk/2017/05/01/intel_amt_me_vulnerability/)

[3] [https://www.intel.com/content/www/us/en/architecture-and-
tec...](https://www.intel.com/content/www/us/en/architecture-and-
technology/intel-amt-vulnerability-announcement.html)

[4] [http://boingboing.net/2016/06/15/intel-x86-processors-
ship-w...](http://boingboing.net/2016/06/15/intel-x86-processors-ship-
with.html)

[5] [https://puri.sm/learn/intel-me/](https://puri.sm/learn/intel-me/)

[6] [https://libreboot.org/faq.html#intel-is-
uncooperative](https://libreboot.org/faq.html#intel-is-uncooperative)

~~~
kennydude
The thing is, they could have implemented that in a half-open manner: sign and
hash releases, and post the code openly.

Then you could view the code and verify it's what is installed, but for
security purposes couldn't change it.

As it stands, it seems deliberately suspicious. Having an HTTP server so low
in the hardware stack feels wrong.

~~~
qb45
Well, it doesn't have to be HTTP, but you do need some sort of server low in
the hardware to control the machine remotely. Such things have existed for
many years in servers; now Intel integrates them into chipsets for corporate
PCs too.

------
chx
Careful with these benchmarks. The 6700K and the 6700T show the same
Cinebench R15 on [https://www.notebookcheck.net/Intel-
Core-i7-6700T-Processor-...](https://www.notebookcheck.net/Intel-
Core-i7-6700T-Processor-Benchmarks-and-Specs.196670.0.html) \-- click on Show
comparison chart below Cinebench R15 - CPU Multi 64Bit. I do not think anyone
believes those two CPUs perform the same -- they are the same Skylake
architecture, but one is 4-4.2GHz w/ a 91W TDP while the other is 2.8-3.6GHz
w/ a 35W TDP. The difference is decidedly _not_ 1%.

It's not that notebookcheck is benchmarking something outlandish: it shows 668
while this Ars article claims 637 for the 6700K, and Notebookcheck benchmarked
the 6950X at 1859 while Ars has 1786 - both are very close.

~~~
vith
I think you may have some model numbers and/or benchmark numbers mixed up. I
don't see the 6700K or 6700T in the charts in this Ars article.

I see a 7600K with the 637 score, but that lacks hyperthreading and has 25%
less L3 cache compared to the 6700T, so it makes sense that the 17% frequency
advantage is mostly balanced out (there's little IPC difference between Kaby
Lake and Skylake).

You don't have notebookcheck's numbers matched up with the right CPUs either:
[https://www.notebookcheck.net/Mobile-Processors-Benchmark-
Li...](https://www.notebookcheck.net/Mobile-Processors-Benchmark-
List.2436.0.html?type=&sort=&search=i7-6700K+i7-6700T+i7-7700K+i5-7600K&or=1&showBars=1&cinebench_r15_multi=1&cpu_fullname=1&l2cache=1&l3cache=1&tdp=1&mhz=1&turbo_mhz=1&cores=1&threads=1)

To be fair, Ars has the i5-7600K listed as an i7, and Notebookcheck has the
cache sizes wrong:
[http://ark.intel.com/compare/88200,97129,97144,88195](http://ark.intel.com/compare/88200,97129,97144,88195)

So there is plenty of confusion to go around.

Edit: Actually the frequency difference may be a bit off from 17%; that was
based on the max single-core turbo frequencies. I don't know what the all-core
turbos are.
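For reference, the 17% comes from the ratio of max single-core turbo clocks. The clock values below are the commonly listed ARK turbo frequencies for these parts; treat them as assumptions, and note the caveat above that all-core turbos would give a different ratio.

```python
# Frequency advantage of the i5-7600K over the i7-6700T, using max
# single-core turbo clocks in GHz (assumed from spec sheets).
turbo_7600k = 4.2
turbo_6700t = 3.6

advantage = turbo_7600k / turbo_6700t - 1
print(f"frequency advantage: {advantage * 100:.0f}%")  # prints 17%
```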

------
rkv
Out of the loop when it comes to the latest processor technology. How does
this chip maintain the same TDP as a 6850K but with 4 more cores? Same process
(lithography), roughly the same frequency, memory size/types.

~~~
reubenmorais
No integrated GPU.

~~~
sliken
Er, neither has a GPU. The i9-7900X has the benefit of a generation of tuning
and runs at a slower base clock (3.3 vs 3.6 GHz for the 6850K). A slightly
smaller L3, and a much larger but also higher-latency L2, help as well.

I suspect moving from a ring bus to a mesh helps as well.

------
dis-sys
> fastest chip in the world

What a huge load of biased nonsense! I have a machine pretty similar to the
one mentioned in the article below; it is using Intel processors released ages
ago - in fact they came from decommissioned servers from some random data
centres. I'd be willing to bet that the machine with "the fastest chip in the
world" is significantly slower/more expensive than mine when it comes to my
long list of day-to-day development tasks.

Oh, don't forget to mention the fact that the "fastest chip in the world" can
reach >100 degrees when fully loaded. Maybe Intel should pay some review sites
to claim it to be the processor most suitable for cooking a meal.

[https://www.techspot.com/review/1218-affordable-40-thread-
xe...](https://www.techspot.com/review/1218-affordable-40-thread-xeon-monster-
pc/)

In case you want to argue that my machine has two Xeons - you can actually
order a single, more recent Xeon from newegg.com, put it into a consumer
motherboard, and beat the xxx out of the i9-7900x. There is no way a 10-core
Intel processor could possibly be the "fastest chip in the world".

~~~
Sorreah
The 20+ core Xeons run at 2.1 or 2.2 GHz, while this runs at 4 GHz, with a
newer architecture. I don't think it's a big stretch to call this the fastest
chip, especially since we're referring to consumer chips and not server chips
that cost 9000 eurodollars (as is the case for those 20-core Xeons).

Also, have both of these setups run a mixed workload that isn't absolutely
parallelizable and watch the Xeon struggle.

~~~
dis-sys
1. The Xeon I am using can turbo to 3.1GHz; sure, it is slower than 4GHz, but
the sheer core count makes it much faster in the day-to-day development tasks
many people face. Consumer grade or not, it doesn't matter when there are no
special requirements. You just buy components from your favourite vendors and
put it together, that is all.

2. You can buy a pair of such 10-core Xeons for almost the same price as a
single i9-7900x.

3. The i9-7900x faces the exact same problem when the workload cannot be
parallelized - you can buy a much cheaper quad-core Intel processor that
overclocks well, push it to say 4.5 or 5GHz, and beat the "fastest chip in
the world".

~~~
Sorreah
You are comparing _2_ chips to a single chip and trying to argue against the
claim that the single chip is the fastest yet.

I'll humour you however.

Your Xeon turbos to 3.1 if only 1 core is stressed, but its all-core boost is
2.4 as per [https://www.pugetsystems.com/blog/2015/07/09/Actual-CPU-
Spee...](https://www.pugetsystems.com/blog/2015/07/09/Actual-CPU-Speeds---
What-You-See-Is-Not-Always-What-You-Get-675/)

I don't see how your 10-core part boosting to 2.4 GHz is "much faster in the
day to day development tasks" given that it's 2 CPU architectures behind and
clocks at almost half the 4.0 GHz achieved by the 7900x (all-core boost).

But even a pair of those 10-cores is probably slower (caches are not shared,
the all-core boost is almost half, and it's roughly 5% slower in IPC due to
the jump from Broadwell to Skylake).

So I don't think you're actually trying to argue that this isn't the fastest
chip yet, but that it's a bad deal compared to looking around and buying some
used server parts.

And yes, that's a better deal, but also a used i9-7900x is a better deal than
a new i9-7900x...

~~~
dis-sys
The link I provided contains detailed benchmark results of the mentioned
system against the 6950x. In quite a few workloads, e.g. SPECwpc, it beats the
6950x by up to 42%. See page 4. The maths is really simple here - the
i9-7900x needs to beat the i7-6950x by 40% to match that performance.
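Spelled out, the arithmetic just normalizes everything to the 6950x as a baseline (the 42% margin is the figure from the linked TechSpot benchmarks):

```python
# Normalize to i7-6950X performance = 1.0. The linked dual-Xeon system
# beats it by up to 42% in SPECwpc, so an i9-7900X would need at least
# the same margin over the 6950X just to tie that system.
xeon_pair_vs_6950x = 1.42  # from the linked benchmark numbers

required_margin = xeon_pair_vs_6950x - 1.0
print(f"i9-7900x must beat the 6950x by {required_margin * 100:.0f}% to match")
# prints: i9-7900x must beat the 6950x by 42% to match
```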

The i9-7900x is _NOT_ the fastest chip in the world, not even the fastest
Intel chip. The consumer/server difference is purely a marketing thing; my
Xeon-based workstation running CS:GO on a daily basis is not a server.

------
polskibus
I'm a bit disappointed - it said "review" in the title, but there are no
interesting details or tests inside.

~~~
TwoBit
It has multiple benchmark results.

------
Frogolocalypse
Am I the only person to read that and think "a couple of reasons to buy and a
whole bunch not to"?

------
gcb0
why does the title say exactly the opposite of the article?

maybe if it ended with "for very specific cases" it would be more true.

------
fasterthanjim
"I wanna go fast!" \- Ricky Bobby

~~~
fasterthanjim
What do you think this is, Reddit?

~~~
fasterthanjim
:(

