
Intel Core i7-7700K Kaby Lake review: Is the desktop CPU dead? - legodt
http://arstechnica.com/gadgets/2017/01/intel-core-i7-7700k-kaby-lake-review/
======
mpweiher
Hmm...sure puts all the criticism Apple got for _not_ waiting for Kaby Lake
for its MacBook Pros into perspective.

We are in an effective post-Moore's law world, and have been for a couple of
years. Yes, we can still put more transistors on the chip, but we are pretty
much done with single core performance, at least until some really big
breakthrough.

On the other hand, as another poster pointed out, we really don't _need_ all
that much more performance, as most of the performance of current chips isn't
actually put to good use but instead squandered[1]. (My 1991 NeXT Cube with a
25 MHz 68040 was pretty much as good for word processing as anything I can get
now, and you could easily go back further.)

Most of the things that go into squandering CPU don't parallelize well, so
removing the bloat is actually starting to become cheaper again than trying to
combat it with more silicon. And no, I am not just saying that to promote my
upcoming book[2], I've actually been saying the same thing since before I
started writing it.

Interesting times.

[1] https://www.microsoft.com/en-us/research/publication/spending-moores-dividend/

[2] https://www.amazon.com/MACOS-PERFORMANCE-TUNING-Developers-Library/dp/0321842847

~~~
spuz
Kaby Lake offers significant reductions in power consumption, which is
important for longer battery life on laptops and definitely something Apple
should be concerned about.

Laptops that have been upgraded from Skylake to Kaby Lake have reported
significant increases in battery life:

http://www.itworld.com/article/3154243/computers/12-things-youll-get-in-pcs-with-intels-new-kaby-lake-chips.html

http://www.theinquirer.net/inquirer/news/3001791/lenovo-thinkpad-x1-carbon-upgrade-packs-kaby-lake-usb-c-and-a-15-hour-battery-life

http://www.pcworld.com/article/3127250/hardware/intel-kaby-lake-review-what-optimization-can-do-for-a-14nm-cpu.html

http://uk.pcmag.com/new-razer-blade-stealth-late-2016

~~~
post_break
Except it seems like any time there is a nice drop in power consumption Apple
says "oh look at how much thinner we can make it by shipping it with a smaller
battery!"

~~~
scarlac
A physically smaller battery usually translates into a weight reduction. I'd
even argue that's the point of laptops: a mobile, light computer. But how you
balance the two is a matter of opinion.

If you look at the number of battery-saving features they have been working
on, it's obvious that they focus on lowering consumption instead of playing
the spec-numbers game.

~~~
twic
> But how you balance the two is a matter of opinion.

I guess. But i do wonder if there isn't some framework for thinking about the
tradeoff in a quasi-objective way. Is it possible to make any kind of general
statements about the marginal utility of improvements in weight/thickness and
battery life?

One hour of battery life lets me take my laptop to a meeting, or sit on the
sofa for a while. Two hours lets me watch a movie on the battery. Eight hours
lets me work away from a power socket all day. Twelve hours lets me do that
_and_ read HN in the evening. Twenty-four hours is longer than i'm ever away
from a power socket. It feels like there's steadily increasing utility up to
that "as long as i want to be sat in front of a screen in one day" point.

A 15 inch, 2.41 cm thick, 2.54 kg laptop rests comfortably on my thighs, goes
in a padded envelope, and fits in my satchel without taking up much space. A
15 inch, 2.79 cm thick, 2.5 kg laptop somehow did seem a lot more ungainly. A
15 inch, 1.55 cm thick, 1.83 kg laptop fits in a bag just as easily, and is
comfortable to hold in one hand. Given that i don't often hold a laptop in one
hand, that seems like a small increase in utility.

Are there other reasons why that 0.86 cm of thickness and 0.71 kg of weight
are a real improvement? What does it let me do that i couldn't before?

~~~
arximboldi
I don't know about thickness, but there is a reason I prefer light laptops:
carrying them in a backpack. If I have a 20min commute by bike every day in
which I carry the laptop, it really does make a difference. Your back will
thank you for those 0.71 kg you took off it.

~~~
untog
Or you could put the whole weight of your laptop in a bike bag.

~~~
jbergens
I had to do that; the MacBook Pro is very heavy.

------
alkonaut
The reason there are diminishing returns is simply that Intel doesn't have to
try. Shrinking the process is terrifyingly expensive and difficult, yet they
are slowly doing it.

If Intel has no competition, they tweak the product just enough to give it a
2017 sticker and sell it at the same price. Because why not?

Only more cores will save the performance trend. If Intel had any competition
in the enthusiast segment, the enthusiast i7s in the $350-500 range would have
gone from 4C/8T to 6C/12T to 8C/16T in the years since Sandy Bridge.

~~~
TsomArp
Also, for 99% of users, performance is good enough. People basically use PCs
for browsing, email, word processing, spreadsheets, presentations, and some
photo retouching.

~~~
pimeys
I think the PC gamers are a bit more than 1% of the users.

~~~
Sir_Substance
Honestly though, I think even in PC gaming, the processor is not where the
bottleneck is.

In the early 2000's, buying a new processor every year would yield noticeable
returns in gaming performance. Once we entered the console port life cycle
(circa 2008), where the performance of the dominant consoles defined what
developers put in games, I found that from 2010-2014 I was able to use the
same processor (and not a top of the line one) with no trouble, and only an
intermediate graphics card upgrade.

I'm now on the core that I bought in 2014, and I honestly expect it to be good
until 2020. We'll see a bit more get juiced out of graphics cards, but we're
nearing a stage where the cost of making graphics good enough to overwhelm
modern graphics cards will be so high that it won't make sense from a
cost/benefit perspective. At that point, I expect CUDA applications to be the
driving force behind graphics card R&D.

~~~
untoreh
Considering that most AAA titles are glorified visual experiences, it's no
surprise the CPU is not the bottleneck: they probably require very little
logic and don't have to render more than a 60-degree first-person FOV.
However, try any game with some flavor of RTS to it and it starts trudging. I
remember reading that RTS games target 20fps as optimal performance in
general, partly because of the polygon count but also because they need to
track a lot of state compared to your average scripted experience.
Free-roaming games a la GTA/ARMA/Watch Dogs are also CPU bound, just like most
MMOs, which have to display and keep track of lots of things. The gaming
segment is definitely something that would benefit from beefier CPUs, I would
say.

Another factor to take into consideration is that people change GPUs more
often than CPUs (I would blame Intel and their socket policies, but I don't
know), so it's obvious you target the lowest common denominator, which is
going to be the CPU; therefore games don't use lots of CPU to begin with.

Last argument: CPU usage in games can't really be scaled down the way GPU
usage can. You can turn down shadows at best, but a game is always going to
use that much CPU, it being again the lowest common denominator.

~~~
golergka
As much as I love CPU-heavy Paradox RTS games, I really doubt that they're on
Intel's mind.

------
sedachv
Another reason to pass on Kaby Lake is the hardware digital restrictions
management built into the chip:
http://arstechnica.com/gadgets/2016/11/netflix-4k-streaming-pc-kaby-lake-cpu-windows-10-edge-browser/

~~~
niftich
For some people, this will instead be a reason to upgrade, given the popular
content gated behind it.

~~~
rl3
I'm excited for it. It helps lessen the burn of waiting for the 6700K's
successor only to find it's basically the same thing.

Still, Netflix requiring DRM on 4K streams is idiotic. Big content sure does
love shooting themselves in the foot.

~~~
anonova
I still can't stream HD video from Google Play or Amazon because I use OSX and
an external monitor.[1] I have no problem paying for content... if I could
actually watch it.

[1]:
[https://support.google.com/googleplay/answer/2528768](https://support.google.com/googleplay/answer/2528768)

~~~
givinguflac
Wow that's just stupid. This is a crappy workaround, but what happens if you
enable display mirroring for your external display? It kills your dual display
setup while watching, but it may enable HD streaming for you. I have not
tested it, just a theory. YMMV.

------
ChuckMcM
Best of times, worst of times. I asked a friend what feature of a modern
microprocessor they would "fix" or "improve" if they could, and they struggled
to come up with anything. It is interesting, though, that a "nominal" desktop
computer these days consists of a nominally sequential processing unit (albeit
with multiple cores) and a large parallel processing unit in the GPU with
lighter-weight cores.

The "painful" part isn't the computation any more its getting multi-gigabyte
data sets into and out of DRAM or moving them around in DRAM. You can finesse
some of that by adding more and more cache on chip but you end up with the
'buffer bloat' problem where you're caches are fighting each other. If you
pick the 16 bytes per computation figure that is sometime bandied about then
your 4.2Ghz average computation bandwidth wants something like 67 GB/second of
_main_ memory bandwidth.
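
That demand figure is just 4.2e9 cycles/s x 16 bytes = 67.2 GB/s. A minimal
sketch of how you might check what a machine actually sustains, assuming
Linux/POSIX and a compiler at -O2 (the buffer size and loop are illustrative,
not a rigorous benchmark):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        size_t bytes = (size_t)512 << 20;            /* 512 MiB, far larger than any cache */
        size_t n = bytes / sizeof(uint64_t);
        uint64_t *buf = malloc(bytes);
        if (!buf) return 1;
        for (size_t i = 0; i < n; i++) buf[i] = i;   /* touch every page first */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++) sum += buf[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        /* printing sum keeps the read loop from being optimized away */
        printf("sum=%llu, sustained %.1f GB/s vs. ~67.2 GB/s demanded\n",
               (unsigned long long)sum, bytes / secs / 1e9);
        free(buf);
        return 0;
    }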

I suspect that is why IBM spent a lot of time on upping the memory bandwidth
in Power8, it does no good to have CPU cycles stalled waiting for memory.

Looking at the existing Intel lineup, they have specialized instructions for
some pretty oblique programming tasks, they have tweaked their pipeline and
register pool to minimize bubbles in the pipeline pretty effectively, and they
have I/O channels that can outperform much of the gear out there. So what do
you go after next?

~~~
userbinator
_Looking at the existing Intel lineup, they have specialized instructions for
some pretty oblique programming tasks, they have tweaked their pipeline and
register pool to minimize bubbles in the pipeline pretty effectively, and they
have I/O channels that can outperform much of the gear out there. So what do
you go after next?_

How about moving the computation to the data? I.e. add ALUs and similar
structures to DRAM, so there's nearly no need to move the data itself.

Here's an interesting paper from a few years ago that dealt with memcpy() and
memset() effectively without using any memory bandwidth:

[https://users.ece.cmu.edu/~omutlu/pub/rowclone_micro13.pdf](https://users.ece.cmu.edu/~omutlu/pub/rowclone_micro13.pdf)

The relevance here being that memcpy() and memset() are already specialised in
x86 by the REP MOVS and REP STOS instructions, respectively.
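
For context, a minimal sketch of what issuing those specialised string
instructions from C looks like, using GCC/Clang extended inline assembly on
x86-64 (illustration only; real memcpy()/memset() implementations wrap this in
alignment and size heuristics). The paper's point is that RowClone gets the
same copy/fill effect inside the DRAM arrays, so even this traffic disappears.

    #include <stdio.h>
    #include <stddef.h>

    /* memcpy via REP MOVSB: microcode copies RCX bytes from [RSI] to [RDI]. */
    static void rep_movsb_copy(void *dst, const void *src, size_t n) {
        __asm__ __volatile__("rep movsb"
                             : "+D"(dst), "+S"(src), "+c"(n)
                             :
                             : "memory");
    }

    /* memset via REP STOSB: microcode stores AL into RCX bytes at [RDI]. */
    static void rep_stosb_fill(void *dst, unsigned char byte, size_t n) {
        __asm__ __volatile__("rep stosb"
                             : "+D"(dst), "+c"(n)
                             : "a"(byte)
                             : "memory");
    }

    int main(void) {
        char src[16] = "rowclone demo", dst[16];
        rep_movsb_copy(dst, src, sizeof src);
        rep_stosb_fill(src, 0, sizeof src);
        printf("%s\n", dst);   /* prints "rowclone demo" */
        return 0;
    }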

~~~
B1FF_PSUVM
> add ALUs and similar structures to DRAM

CDC-6600 all the things!

(The 'talk to smart peripherals' approach, if memory serves.)

------
scarface74
Completely off topic...

For someone who doesn't follow Intel chip generations closely, the chart of
the generations of Core i3/i5/i7 chips was informative. Two observations:

1. Why doesn't Intel actually name the chips to make them clearer?

2. The poor Mac Mini that Apple is selling in 2017 uses 2013-era chips and is
way overpriced.

Edit: fixed typos

~~~
kogepathic
> 2. The poor Mac Mini that Apple is selling in 2017 uses 2013-era chips and
> is way overpriced.

The author does point out, though, that single-core performance hasn't
improved since Sandy Bridge (2011) and multi-core performance has been largely
lackluster since then.

While I agree it's shameful for Apple to be selling the current Mac Mini, the
performance from 2017 chips is going to be only ~20% better than what's in the
Mac Mini now. [0]

It's not a completely fair comparison because the i7-7500U is 15W, but I can't
find any Intel mobile CPUs released in 2015 or 2016 which were dual core and
didn't have a 15W TDP.

[0] http://www.cpubenchmark.net/compare.php?cmp%5B%5D=2345&cmp%5B%5D=2863

~~~
robotjosh
It is the stuff around the CPU that is improving. Those 2013 chips cannot
properly drive a 4K display. Even a 2014 MacBook Pro with AMD graphics is
sluggish at 4K. But a modern Skylake chip with Intel HD 530 integrated
graphics can drive a 4K screen at 60Hz without any mouse lag.

~~~
Stratoscope
> Even a 2014 MacBook Pro with AMD graphics is sluggish at 4K.

I have a late 2013 MBPR (NVIDIA GT750M) with an external 4K monitor in
portrait mode connected via DisplayPort, plus the laptop's display. Both are
running at 60Hz. I don't see any sluggishness or mouse lag at all; everything
is very smooth.

I mostly run Windows 10 on this machine, but it also works fine when I boot
into OS X.

~~~
quicklyfrozen
I'm sure he's talking about MBPs with only Intel graphics (which frankly
seemed a little underpowered for the Retina display much less a 4k monitor).

~~~
Stratoscope
Actually not: "a 2014 macbook pro with amd graphics".

A while back I tested a 2014 MBPR with the AMD GPU vs. my late 2013 model with
NVIDIA, and the AMD was a bit faster.

~~~
quicklyfrozen
Not sure how I missed that...

------
bhouston
What they are not mentioning is that the performance of the Intel Core
i7-7700K is also essentially identical to that of the Intel Core i7-4790K,
which was launched in 2014, is three generations old, and is little more than
an overclocked Intel Core i7-4770K from 2013. We basically haven't seen any
improvements for this series of chips in years, except for a bit of
overclocking with the 4790K and now the 7700K.

~~~
Declanomous
>... the Intel Core i7-4790K, which was launched in 2014, is three generations
old, and is little more than an overclocked Intel Core i7-4770K from 2013

The 4770K didn't support VT-d, whereas the 4790K did. The 7700K has a much
faster bus speed, and supports up to 64 GB of DDR4 memory.

I know these are small changes, but they make a difference. For instance, I
have a 4790K because I wanted IOMMU (VT-d). I wanted an ITX build, but I went
with micro-ATX because I needed four DIMMs of RAM to reach 32 GB. With a 7700K
I could have an ITX build with IOMMU and 32 GB of RAM.

Here's the ARK comparison:
[https://ark.intel.com/compare/75123,80807,97129](https://ark.intel.com/compare/75123,80807,97129)

I don't know if the 7700K actually supports VT-d, but I would guess it does. I
wouldn't actually buy one until I knew it supported VT-d, though.

~~~
dx034
A question as I'm not that familiar with the Intel terminology: What does it
mean that they dropped the "SmartCache"? Is the newer cache not smart anymore
or did it just become standard? Or is Smart actually worse?

~~~
Declanomous
I don't think they actually dropped SmartCache. SmartCache is just a way of
saying that all of the cores share the same cache space. (Otherwise the cache
is generally divided equally among the cores in a multi-core processor.) The
Kaby Lake architecture is basically the same as the Devil's Canyon
architecture with some minor tweaks, so there is basically no chance they
changed how caching works, as that would be a really major change. Chances are
the person in charge of adding information to ARK doesn't have the ability to
release or verify all of the info yet.
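
If you want to see the shared last-level cache for yourself, a minimal sketch
reading the usual Linux sysfs cache files (assuming index3 is the unified L3,
which is the common layout; the index numbering can vary between CPUs):

    #include <stdio.h>

    /* Print cpu0's L3 level, size, and which logical CPUs share it.
       On a part with a shared ("Smart") cache, shared_cpu_list spans
       every core/thread rather than a single core. */
    static void dump(const char *path) {
        char line[256];
        FILE *f = fopen(path, "r");
        if (!f) { printf("%s: <not present>\n", path); return; }
        if (fgets(line, sizeof line, f)) printf("%s: %s", path, line);
        fclose(f);
    }

    int main(void) {
        dump("/sys/devices/system/cpu/cpu0/cache/index3/level");
        dump("/sys/devices/system/cpu/cpu0/cache/index3/size");
        dump("/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list");
        return 0;
    }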

------
tedsanders
This is what the end of Moore's law looks like.

You spend $12 billion in annual R&D (a total that's risen substantially over
the past decade) and then someone accuses you of "not trying" because your
chips aren't that much better than last year's models.

10 years from now, I expect that the death of Moore's law will have permeated
much further into our cultural consciousness. Skeptics cried wolf for years,
and they were wrong, but this time the wolf has finally arrived.

~~~
drewrv
It was always a mistake to call it a law in the first place. People treat it
like it's gravity or something.

~~~
delecti
I always interpret it as more along the lines of "Murphy's Law." It was a
curiously persistent trend more than a precise mathematical construct.

------
Retric
So, as someone still using an OC'd 2600K from 2011, there is still little
point in upgrading, which feels crazy.

~~~
ajross
Desktop CPUs are a mature market. Desktop applications tend to be either not
performance limited or limited by something other than the CPU.

Consider that the 7700K in question gets similar performance to your Sandy
Bridge box at probably 1/3 of the power draw. (I had a 4.2GHz 2600K for a few
years and the socket pulled about 102W, so I'm guessing that's what you're
looking at). A 3X increase in power efficiency is certainly not a lack of
progress, it's just that you don't care.

~~~
Retric
It costs around $500 to get similar performance. So ~$500 / ($0.20/kWh *
0.102 kW * 2/3) is ~37,000 hours, or 24/7 for about 4 years.

So, no, lower power is not a solid reason to upgrade.
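
A minimal sketch of that break-even arithmetic (the $500 cost, $0.20/kWh rate,
102 W draw, and 2/3 saving are the figures from the comments above, not
measured values):

    #include <stdio.h>

    int main(void) {
        double upgrade_cost = 500.0;          /* $ for comparable new hardware */
        double rate         = 0.20;           /* $ per kWh */
        double old_draw_kw  = 0.102;          /* ~102 W socket draw on the 2600K */
        double saving       = 2.0 / 3.0;      /* claimed reduction in power */

        double saved_per_hour = rate * old_draw_kw * saving;   /* $ per hour of use */
        double hours = upgrade_cost / saved_per_hour;
        printf("break-even after %.0f hours (%.1f years at 24/7)\n",
               hours, hours / (24.0 * 365.0));
        return 0;
    }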

~~~
ajross
On a desktop. I _know_ you don't care, I said so. But Kaby Lake is a silicon
product for a bunch of markets (of which "desktop" is a rapidly shrinking
one), and in those markets it represents solid improvement over its
predecessors.

~~~
Retric
I don't think the desktop is shrinking as much as sales suggest. I would
happily drop $2k on a new desktop if there were any reason to do so. Intel
chose to waste 5 years of performance gains on low-end graphics capabilities
instead of bumping cheap multi-core performance.

~~~
ajross
That use of "cheap" is doing too much work. Obviously Intel hasn't squandered
many-core scalability either, c.f. Broadwell-E which is available with as many
as 10 cores and targetted at exactly your market.

It's just not cheap, being a 240mm2 die (almost twice what Kaby Lake is, with
attendant yield penalties) targetted at a niche market with low volume.

So your point isn't that Intel isn't innovating, it's that you want them to do
it in a way that violates straightforward economics and hand it to you for
less than it takes to produce.

Basically: you got spoiled by the early days of the VLSI revolution. Silicon
scaling isn't doing what you expect any more.

~~~
Retric
At Intel's scale having 4 core chips with integrated graphics or 6 core chips
without integrated graphics and slightly smaller transistor count is not a
significant cost, meaning they could both sell for similar prices.

They don't do this because they have zero real competition and can milk the
high-end market. Further, by holding back performance gains they can extend
the upgrade cycle as long as possible.

PS: Cellphones are approaching desktop chip performance.

~~~
ajross
> At Intel's scale having 4 core chips with integrated graphics or 6 core
> chips without integrated graphics and slightly smaller transistor count is
> not a significant cost, meaning they could both sell for similar prices.

Yeah, that's not how silicon manufacturing works. Tooling up for a new part of
this size and ramping production gets into the tens to hundreds of millions of
dollars. No one does niche-market (and yes, gaming desktops are niche) parts
at the prices you expect.

Low-volume big parts cost a thousand dollars or more, that's no different at
Intel than it is anywhere else.

~~~
Retric
It's going to be ~1:1 with the number of discrete Nvidia and AMD graphics
cards sold, which is not exactly niche.

Further, Intel already has a range of chips in production, which makes that
hundreds-of-millions-per-chip figure vastly overblown. You can measure these
numbers in several ways, but the difference between ramping up 100 million of
chip X versus 80 million of chip X plus 20 million of chip variant Y is
nowhere near $200+ million.

~~~
Retric
Ed: For clarity, the cost delta is not $200+ million; the price to make 100
million of chip X is vastly more than $200+ million.

------
overcast
For my Windows machine, which is primarily for gaming, I only recently updated
from an i7-920 to an Intel Xeon X5675 3.06GHz, for $95 used on eBay. These are
NINE-year-old processors that, with a good video card, still completely max
out any game I throw at them at a resolution of 2560x1600.

My development/main machine is a 2008 Mac Pro, which has no problems doing
anything I need on it.

I mean besides serious computations, what is my incentive to ever upgrade
here? I'm basically waiting for entire system failure at this point for
either.

~~~
gpderetta
That thing has 6 cores, which is still uncommon for desktop cpus today, it
easily and reliably overclock in the 4Ghz range. It supports loads of ram
(including ECC), VT-d, has 48 PCIe lanes (2.0 though) and is dual socket
enabled. You can get lower clocked models which are even cheaper and they
still overclock to 4GHz.

A motherboard supporting overclocking or dual sockets [1] might cost more than
the CPU, though (in fact they cost more now than when they were new).

I own a X5670 myself. Although I bought a motherboard that supports
overclocking, I have yet to bother.

[1] Only the legendary EVGA SR-2 does both, and it still costs quite a bit.

~~~
overcast
Exactly! Mine is at over 4GHz easily.

------
drej
I'm not sure the jabs at Intel are fully justified. I don't think they can be
judged only by Geekbench scores of their top-of-the-line processors (excluding
Xeons now). When you look at Intel's admirable battle with ARM via the Atom
and Core M lines, the story is quite different: stick computers and fanless
laptops, both with the x86 compatibility that ARM by definition just does not
have (yes, I know of Microsoft's latest wizardry in their Continuum dept.).

So sure, we're hitting some performance limits, but power efficiency (and thus
battery life and/or weight) and form factors are also important
considerations, and Intel is making progress there.

~~~
cookiecaper
Yes. Intel's biggest concern the last several years has surely been preventing
ARM from cannibalizing its PC marketshare, and trying to establish itself as a
reasonable option for mobile devices. Corporate energies haven't exactly been
focused on increasing desktop computing performance.

------
everyone
Big photo of a motherboard.. What kind of monstrosity of a mobo is that?
Please tell me all that shit on it is metal and has some sort of cooling
purpose. It looks like a bunch of moulded plastic to make it 'look cool'.

~~~
slantyyz
Short answer, it's a "gamer" board.

Unfortunately, the path to the best hardware on PC is usually with gamer
hardware, which means you end up with hardware with pointless bling attached.

~~~
Alupis
> which means you end up with hardware with pointless bling attached

Sometimes, yes... but in this case, it appears to be a large heatsink spread
out over the mobo. Gamer mobos are often set up to be overclocked, so larger-
than-normal heatsinks are common.

~~~
AmVess
Actually, that's just a large dust shield. It is bling, but that's ok. There
are people who like to outfit their computers with LEDs, glass side panels,
illuminated liquid-cooling fluid... you name it. This board is made for them.

------
faragon
For new PCs, Intel chips are amazing. However, upgrading, e.g., a desktop PC
with a 2.8GHz (3.46GHz turbo, 8MB L3 cache) Core i7 from 2009 and a decent
enough video card is not: spending 300-400 USD on a new CPU for just a 30%
performance increase is nonsense, in my opinion.

Please, Intel: add more L3 cache to desktop PC CPUs, and put in 6 cores
instead of 4, not just for servers.

~~~
pier25
Problem is you also need to upgrade your motherboard, memory, and probably
your CPU cooler.

And if you do that, you might as well just build a new computer with a current
GPU.

~~~
r00fus
Sounds like a product designed to produce new sales. Funny that.

~~~
pier25
Who would have thought?

------
404ed
I really hope AMD's Ryzen is pretty great when it comes out this year.

Intel seems to be diddling about without competition these days, and we
consumers need AMD to pull a "K7 Athlon" again, which beat Intel to 1 GHz and
was a great processor.

Otherwise it seems everyone's going for more energy efficiency and mostly
mobile (ARM) these days.

------
dmix
There are plenty of benefits to desktop over laptops that go beyond the CPU.

From the article:

> Elsewhere, there are the usual array of Asus enhancements, including its
> excellent 2X2 802.11 a/b/g/n/ac MU-MIMO Wi-Fi, SupremeFX Audio S1220
> solution (featuring a ESS Hi-Fi Sabre DAC), Intel I219-V Gigabit Ethernet
> chip, dual M.2 SSD slots, and enthusiast-friendly features like dedicated
> water pump headers, PMW/DC support across all five fan headers, and SLI
> support

You get a top-of-the-line Wi-Fi card, onboard audio with a real DAC, dual M.2
SSD slots, tons of cheap RAM, etc. Not to mention the video card potential.

The only thing that makes these less appealing is those recent Thunderbolt
boxes for external video cards that also act as docks with DACs and USB-C
ports.

------
supernovae
I can't wait to see the AMD Ryzen CPUs. Time to upgrade my trusty Phenom II
955.

~~~
mrmondo
I'm such a pessimist with AMD CPUs, I just have bad memories of power hungry,
hot running chips with lots of microcode bugs and problems with ACPI etc...

~~~
Alupis
> I just have bad memories of power hungry, hot running chips with lots of
> microcode bugs and problems with ACPI

Sounds to me like the original Phenom series of CPUs. They were very power
hungry and ran very hot (140+ watts at stock clocks)... the newer CPUs are a
bit better, although they still lag in performance.

If AMD's Ryzen benchmarks are to be believed, that seems like it will change
very soon though.

~~~
tormeh
I have an FX-8350 (similar multi-threaded best-case performance to i5-6600k)
and I love it, but AMD's benchmarks are rarely to be believed.

~~~
Alupis
> but AMD's benchmarks are rarely to be believed

I think that statement applies to all vendors, generally.

------
MichaelBurge
I wouldn't mind if they didn't give any better single-threaded CPU performance
at all, if they gave more PCI Express lanes to run more GPUs and storage.

72 PCIe lanes can run 4 GPUs at x16, and an NVMe SSD at x8. I just built a
server where half my GPUs have twice the bandwidth of the others.

------
socialist_coder
I want to upgrade solely for the additional PCIe lanes. If you haven't seen
how fast a new M.2 NVMe SSD is, you are missing out. It's the same kind of
speed boost we saw going from spinning disks to SSDs, again with SATA SSDs to
NVMe. It's crazy fast.

~~~
monochromatic
In benchmarks or in real world usage?

~~~
codebook
Definitely in benchmarks.

~~~
monochromatic
I'm not sure I care then. My old Samsung 840 Pro is fast enough that it's
rarely if ever the weak link.

------
stolk
Once the next generation (Cannon Lake) is out, AVX-512 will be available.

With that, the SIMD width will be doubled, and AVX-512-optimized code could
possibly double in performance as well, as long as the memory can keep up.

[https://en.wikipedia.org/wiki/AVX-512](https://en.wikipedia.org/wiki/AVX-512)
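
A minimal sketch of what the doubled SIMD width looks like at the source
level, assuming a compiler invoked with AVX-512F enabled (e.g. -mavx512f) and
a CPU that actually supports it; each intrinsic below handles 16 floats per
operation instead of AVX2's 8:

    #include <immintrin.h>
    #include <stdio.h>

    /* c[i] = a[i] + b[i], 16 single-precision lanes per iteration (AVX-512F).
       n is assumed to be a multiple of 16 to keep the sketch short. */
    static void add_f32(const float *a, const float *b, float *c, size_t n) {
        for (size_t i = 0; i < n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);
            __m512 vb = _mm512_loadu_ps(b + i);
            _mm512_storeu_ps(c + i, _mm512_add_ps(va, vb));
        }
    }

    int main(void) {
        float a[16], b[16], c[16];
        for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
        add_f32(a, b, c, 16);
        printf("%.1f %.1f\n", c[0], c[15]);   /* expect 0.0 45.0 */
        return 0;
    }

Whether that translates into a 2x speedup depends on the memory system keeping
those 64-byte vectors fed, which is exactly the caveat above.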

------
alfalfasprout
Honestly, the biggest improvements left in CPU performance come from massively
increasing the caches. I don't want more cores. I want a freaking massive L3
cache. I want direct access to insanely fast SSDs.

For anything that isn't purely focused on floating point performance (where
throwing a bunch of GPUs at the problem is fine), memory and storage are now
the bottlenecks, and there are still several orders of magnitude of
improvement to be had there.

Then there's the smart move of integrating FPGAs on-die and connecting them
with insanely fast buses. Now we can do cool things like process data coming
in via a 40GbE link on a single on-die FPGA and perform business logic on the
CPU with data that the FPGA writes to memory. Or write an FPGA accelerator for
a function.

Single core performance may not get better... but we still have huge places
to make improvements that will require some changes in how we run code.

------
stagger87
I was kind of hoping they would sneak in AVX-512 finally. It was supposed to
be on some earlier chips (Haswell? Skylake?) but hasn't been discussed since
for these consumer chips. Wonder why? Seems like one avenue for performance
improvements.

------
Symmetry
The most exciting thing about the release is probably the support for XPoint
memory.

[https://en.wikipedia.org/wiki/3D_XPoint](https://en.wikipedia.org/wiki/3D_XPoint)

------
nailer
Site flagged for malvertising:
[https://pbs.twimg.com/media/C1UMvNHXUAEXojJ.jpg](https://pbs.twimg.com/media/C1UMvNHXUAEXojJ.jpg)

A sad day for Ars.

------
sengork
For those who are looking for a more in-depth, technical, real-world power-
user review of this CPU, I highly recommend The Tech Report's latest published
article:

http://techreport.com/review/31179/intel-core-i7-7700k-kaby-lake-cpu-reviewed

------
rphlx
ITT: people bitching and moaning about a 5GHz, ~$90, high-IPC core being a
disappointment, when in fact it is an engineering marvel.

------
cletus
Couldn't this be because Kaby Lake was the previously unplanned third
iteration of Intel's 14nm process? Most Kaby Lake benchmarks I've seen have
been only marginally better than Skylake. We're really waiting for 10nm at
this point.

------
jpalomaki
While Intel does not have much competition on the desktop, which is stuck on
x86, I would assume they would get some pressure on the server side if they
really stopped trying.

------
mrmondo
That title... while I'm not disagreeing with the intent or the message, it has
to be classed as clickbait. Come on, that's one hell of an opinionated title
for a submission that links to an article with a completely different title.
Even without any research, I'm quite confident that the company in question
(Intel) hasn't 'stopped trying'; perhaps they've undervalued a product line or
had a lacklustre release, but I'm sure there are a lot of engineers 'trying'.
Again, clickbait titles - grr!

~~~
zelon88
I disagree.

With AMD no longer a contender, Intel has quite literally stopped trying.

Look at NetBurst-era Intel vs. modern Intel. Intel behind the 8-ball can show
you what Intel looks like when it's trying.

~~~
Jweb_Guru
Or maybe Intel is having trouble moving to 10nm and there's little it can do
at this point? I have no idea why when it comes to Intel people feel the need
to phrase engineering problems in moralistic terms (and conversely, when it
comes to politics people often want to turn moralistic problems into
engineering ones).

~~~
zelon88
If that were the case they wouldn't release a mediocre interim product that
fails to significantly build upon the performance and features of the existing
product. And if you do decide to build that mediocre chip, you don't price it
like a top-shelf one.

We're not talking about when Saab had to keep their new models secret to sell
the rest of the old ones for survival. This is Intel. They control their
market and have no competition. Selling crappier products than you're capable
of is a moralistic problem. And it's one that the industry is complacent with.
Just ask Apple.

If all remaining Intel competition were to disappear tomorrow Intel wouldn't
stop selling chips. They would just stop making them any better. Cost
reduction is just business. If a business doesn't have to spend on research to
stay afloat it shouldn't and it won't.

------
coldcode
This is why Apple will soon abandon Intel entirely, which is why Apple
abandoned the PowerPC as well. Either you progress forwards or you get run
over.

~~~
tossedaway334
Nobody is even close to touching Intel in its niche of high single-chip
performance at reasonable power usage. PPC was way, way behind when it got
replaced.

~~~
Bud
Not true; Apple's own A-series silicon is starting to be a serious threat. For
instance, the A10 in the iPhone 7 now beats all Intel chips ever shipped in
MacBook Airs in single-core CPU benchmarks. Right now, today. And it's less
than 3 years behind vs. Intel CPUs in the MacBook Pro. It also beats the Mac
Pro's 12-core Xeon (from 2013) in single-core.

This despite the A10 running on FAR less power.

Apple's definitely within shouting distance and is a serious threat to just
replace Intel entirely, at least for its own needs.

~~~
pier25
Achieving the same level of performance with ARM is only half the story.

Apple has to surpass the Core i series to make any sense to switch.

And then there's the whole clusterfuck that would be porting all the x86
software to ARM. When Apple switched to Intel emulation of Power PC was usable
via Rosetta but AFAIK emulation of x86 with ARM is extremely slow.

~~~
skykooler
I wonder whether it would be possible for Apple to build a hardware
accelerator for the x86 emulator into the chip?

...Though at that point I suppose they'd basically be building an x86 co-
processor into it. No idea if that would actually help performance or not.

------
luckydude
So this is surely self-serving, but...

Intel used BitKeeper for their chip dev process for a long time, and during
that time the chips got faster with each release. More than a decade of chips
were developed under BitKeeper and we saw perf improvements with every chip.

They moved to Git and ever since then performance hasn't really moved. The
Atom people were the first to move and the mainstream processors followed.

I said it was self-serving, so rain all over me, but I do wonder if there is
anything there. I know we helped them move pretty fast a while back.

~~~
grzm
If it's anything other than coincidence, I think it's more likely that the
cause would be related to why they migrated from BitKeeper to git, rather than
the change in tools itself.

~~~
luckydude
I really don't know. As a butthurt BitKeeper guy I would love for it to be SCM
reasons. I mostly think that had nothing to do with it but when I look at the
data, I wonder. We spent a lot of time helping them with their problems, all
stuff that nobody in the Git world steps up to do (like submodules that
actually work). All that help may not have made a difference.

On the other hand, it went way beyond just SCM stuff: I debugged their various
filer problems (which they blamed on BK, and I'm happy to say not a single one
had anything to do with BK; BK just exposed that the filer was buggy). I like
to think our help did make a difference, but I really don't know.

What I do know is we did whatever Intel wanted for over a decade, and they
used to have BitKeeper in every PowerPoint they wrote about new processor
development. But they have a culture of tools having to be free, so I guess
they get what they pay for.

~~~
B1FF_PSUVM
The "toxic culture" is much wider than you realize ... until it bites you.
Then it's too late.

But good luck to you, you did good and played fair, AFAIK.

