
Intel Announces 8 Core I9-9900KS: Every Core at 5Ghz, All the Time - Doubleguitars
https://www.anandtech.com/show/14402/intel-announces-5-ghz-all-core-turbo-cpu
======
ksec
I used to be pro-AnandTech and considered them one of the best sources online
for hardware news. But the fact that they have yet to write a single post, big
or small, about Intel's ZombieLoad and its implications for performance
worries me a bit.

Then there are the usual "Intel" benchmarks [1] on the GPU side. They suggest
the two CPUs were both running at a 25W TDP to give a "fair" comparison,
without mentioning that the Ice Lake-U CPU was running with 50% more memory
bandwidth than the AMD Ryzen. And we know graphics benchmarks depend a lot on
memory bandwidth. The memory used was at least mentioned on Tom's and other
sites, but not AnandTech. (Although none of them mentioned the bandwidth
difference; it was up to the reader to work it out.)
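For anyone wanting to work it out themselves: peak theoretical DRAM bandwidth is just transfer rate × bytes per transfer × channels. A quick sketch (the configurations below are hypothetical stand-ins to illustrate a gap of roughly that size, not the actual review setups):

```python
def peak_bandwidth_gbs(mt_per_s: int, bus_width_bits: int, channels: int) -> float:
    """Theoretical peak DRAM bandwidth in GB/s:
    (mega-transfers/s) * (bytes per transfer) * channels / 1000."""
    return mt_per_s * (bus_width_bits // 8) * channels / 1000

# Hypothetical configs, purely to illustrate a ~50% gap:
fast = peak_bandwidth_gbs(3733, 64, 2)  # e.g. LPDDR4X-3733, two 64-bit channels
slow = peak_bandwidth_gbs(2400, 64, 2)  # e.g. DDR4-2400, two 64-bit channels
print(f"{fast:.1f} GB/s vs {slow:.1f} GB/s -> {fast / slow - 1:.0%} more")
```

With numbers like these the faster configuration has over 50% more theoretical bandwidth, which is exactly the kind of difference that matters for iGPU benchmarks.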

Anyway, none of these consumer CPU upgrades interest me anymore (although any
improvement to the iGPU would be great). I am eagerly waiting for a 2S,
128-core EPYC 2 server, or one on AWS, to play around with.

[1] [https://www.anandtech.com/show/14405/intel-teases-ice-lake-i...](https://www.anandtech.com/show/14405/intel-teases-ice-lake-integrated-graphics-performance)

Edit: And the lesson here: never trust a single news source. Always have a
few options open and fact-check yourself (if you have the time).

~~~
spamizbad
It's not just Anandtech.

I feel like the entire "PC enthusiast" review space has dropped the ball on
hardware vulnerabilities. Reevaluating performance between microcode and OS
patches is an afterthought, and when a new CPU hits the market the numbers are
presented without the obvious disclaimer that these performance gains may
evaporate within months.

Some even perpetuate the "only relevant to datacenters" myth, despite the
fact that security researchers have shown these vulnerabilities can be
exploited with JavaScript in the browser.

~~~
keldaris
I'm glad the PC enthusiast space hasn't succumbed to the wild hysteria these
side-channel issues have caused elsewhere. It's tiresome to see every new
variation people come up with reported as a new apocalypse all over again.
Half the reason I still pay attention is to find out whether there's a new
Linux boot switch I need to flip to disable some new mitigation's performance
regression.

Even though I'm personally in the fortunate position not to have any
reasonable exposure to these vulnerabilities, I wouldn't be particularly
worried even if this wasn't the case. It's been well over a year since
Meltdown and Spectre came out and there still hasn't been a single case of
anyone successfully using these vulnerabilities to productive ends in the wild
that I know of. Obviously, cloud computing vendors need to pay attention and
there are legitimate business concerns that are affected by this, but insofar
as personal computing goes? If people persist in the ridiculous notion that
constantly running completely arbitrary code in naive sandboxes is a great
idea, I imagine there will eventually be issues, but so far the issue seems to
be vastly overblown in the popular media.

~~~
spamizbad
I don't see how the act of taking and publishing measurements after microcode
and OS updates constitutes hysteria. It's my understanding that, at least on
Windows, you pretty much have to opt out of these patches or manually install
an update that disables the mitigations.

I fully support a user's right to bypass these mitigations, and you're correct
that your typical desktop user, at least today, isn't a target. But it seems
odd that websites dedicated to performance computing have a blind spot for how
automatically installed updates will impact performance.

~~~
wtallis
> I don't see how the act of taking and publishing measurements after
> microcode and OS updates constitutes hysteria.

It's quite easy to sensationalize benchmark results even unintentionally. The
average reader of PC hardware review sites is totally willing to latch on to a
microbenchmark result that shows a 20% performance drop and claim that it's
disastrous for performance, even if the actual added delay to real-world
operations is a fraction of a millisecond and thus will almost never cause the
result of your user input to be delayed by even a single frame. There's a
certain degree of irresponsibility in publishing results that you know will be
taken out of context by almost everyone who reads them. I've discontinued
benchmarks in the past because it was frustrating to see readers pretend that
they showed a meaningful difference between products when the reader's
workload never came close to the workload that benchmark represents.

~~~
chmod775
Did you just make an argument against PC review sites publishing any
benchmarks at all because readers can't be trusted to interpret them
correctly?

That would kind of defeat the point.

If they published a benchmark in the past and don't bother to correct the
benchmark when it becomes out of sync with reality - that is just bad
journalism.

Nobody is saying you should go and cherry-pick benchmarks after the
mitigations hit, but you should definitely check the benchmarks you already
published once.

These sites can and should expect an informed reader.

In any case: Leaving wrong information up uncontested helps neither "experts"
nor laymen.

~~~
wtallis
> Did you just make an argument against PC review sites publishing any
> benchmarks at all because readers can't be trusted to interpret them
> correctly?

No, and you should know better.

> If they published a benchmark in the past and don't bother to correct the
> benchmark when it becomes out of sync with reality - that is just bad
> journalism.

Proper practice is to publish the full test conditions, including software,
firmware and nowadays also microcode versions. The availability of newer
versions does not make older results any less true.

At AnandTech, we make all reasonable attempts to keep a thorough database of
older hardware tested on newer benchmark suites, but the time this requires
means we cannot re-test everything multiple times per year. I have over 200
SSDs and counting in the collection, and that test suite is over 30 hours
long. The collection of CPUs is much larger. GPU reviews typically have fewer
back-catalog hardware entries because updating to new drivers a few times a
year is often unavoidable. You can browse the results for current and previous
test suites at
[https://www.anandtech.com/bench/](https://www.anandtech.com/bench/)

> These sites can and should expect an informed reader.

You don't read the comments as often as we do.

------
lawrenceyan
Intel has fallen so far. It's honestly a shame to watch at this point.

I remember back when Sandy Bridge was first released, and I was extremely
pleased by the performance improvements my new chip was able to provide. Did
they really manage to mess everything up within such a limited timespan? Or
was there just always a hidden incompetence that never showed itself until
now?

~~~
dgacmu
It's not Intel, it's the end of Moore's law. Intel's problem is that they are
not well positioned to capitalize on the specialized processors that will be
required to continue eking out advances for the next decade or two before
we're entirely up a creek. :)

~~~
lawrenceyan
From how Apple and AMD are doing with their own processors though, it seems
like Intel is just fundamentally doing worse even as things become more
difficult with smaller transistor sizes. Apple is going to replace Intel with
their own processors because Intel has failed to meet its requirements. AMD,
on a shoestring budget and basically on the verge of bankruptcy the entire
time it was doing its R&D, managed to build out a new architecture that has
delivered amazing results, while Intel has had basically nothing to show in
the same timeframe.

But perhaps there's something I'm missing here. Is there a misconception or
lack of information here on my end that needs to be clarified? I can only make
my analysis largely as an outsider looking in when talking about
semiconductors.

~~~
dgacmu
Intel was ahead, and hit the wall first. Apple & AMD are not ahead, they're
just catching up. I don't want to understate how big a problem that could be
for Intel, of course. But they're also doing it on low margin parts, and Intel
continues to make bank with their data center parts.

I don't think any of this represents a short-term problem for Intel, other
than the general downturn in processor sales because fewer people will need to
upgrade. But I think it represents a very serious long-term threat.

They have some really cool technical advances, like 3D XPoint. But I'm
concerned that they do so badly on embedded and custom integration from a
long-term perspective.

~~~
vbezhenar
Apple sold millions of iPhones with 7nm chips while Intel struggles to build
comparable 10nm chips and keeps releasing 14+++ nm. AMD will release 7nm chips
very soon. It does not seem like they are catching up. Quite the opposite.

~~~
akvadrako
You can’t compare nm between vendors - they’re just marketing numbers.

~~~
throwaway2048
Not directly no, but the actual feature size and density of TSMC 7nm and Intel
10nm are comparable.

~~~
pkaye
What about die size?

~~~
wtallis
Then you have to ensure you're comparing chips designed for the same market
segment. Die size comparisons work well if you're talking about a Cortex-A53
on 16nm vs 12nm. They don't work as well when you're talking about a full
SoC, or even a desktop CPU+GPU combo where core counts for both sides of the
chip can vary greatly.

------
lkschubert8
Should they be describing it as 8 cores and 16 threads when there have been
multiple security vulnerabilities that require turning off hyperthreading to
be mitigated?

~~~
saltyshake
Besides cloud providers running VMs and containers, is Spectre/Meltdown
really such an issue for day-to-day consumers?

~~~
infotogivenm
Yes. I think this is a common misconception.

These attacks work fine in the browser, as researchers continue to show. They
allow complete bypass of any native app sandboxing layers. Surely you don't
run everything on your box as root all the time.

~~~
mjrow
I'll keep HT on because I use NoScript and I encourage others to do the same.

~~~
d33
Meh. It doesn't require JavaScript for your computer to run logic described
by others. Browsers are such complex machines that it wouldn't surprise me if
you could, for example, craft a malicious SVG that would bypass that, or a
Turing-complete CSS file that triggers a vulnerability...

By the way, does NoScript actually block in-SVG JavaScript?

~~~
dual_basis
Sure, but we all take risks every day. If you're worrying about
Turing-complete CSS files exploiting Spectre and Meltdown, then you probably
don't leave the house much.

~~~
ben_w
We _know_ that attackers have reason to exploit literally all compute
resources they can find a way to access. This is more like worrying about
leaving the house during an epidemic of exploding ebola-infected pigeons — if
you can do something about it, you should.

~~~
dual_basis
Attackers also have to weigh cost/benefit when evaluating methods of attack.
Claims that "CSS is Turing complete" require a user to act as a "crank" [0],
so there is lower-hanging fruit out there than trying to program the
complicated logic needed to drive the Meltdown/Spectre exploits in CSS.

[0]
[https://news.ycombinator.com/item?id=10734966](https://news.ycombinator.com/item?id=10734966)

------
shereadsthenews
I don't get it. If it has an all-core frequency of 5GHz, doesn't that mean
they've left some single-core boost on the table? Or have they hit some other
limit and this part is basically free of thermal limits?

~~~
gchamonlive
I believe Intel processors can't boost all cores. At least some tests I have
done with my notebook processor (i7-8550U, 4c/8t) using `stress -c n`, where
n is the number of processors, show that for n > 1 the processor doesn't
reach 4 GHz, only about 3.7 GHz, while the package temps are still around
70 °C. Only a single core under full load reaches 4 GHz before throttling.
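For anyone who wants to reproduce that kind of check on Linux, a minimal sketch that samples per-core clocks from sysfs while `stress -c n` runs in another terminal (the sysfs path and the availability of `scaling_cur_freq` depend on the kernel's cpufreq driver, so treat this as an assumption):

```python
from pathlib import Path

CPUFREQ_BASE = Path("/sys/devices/system/cpu")

def khz_to_ghz(khz_text: str) -> float:
    """Convert a sysfs frequency reading in kHz (e.g. '3700000') to GHz."""
    return int(khz_text.strip()) / 1_000_000

def core_clocks_ghz():
    """Current clock of each core in GHz; empty list if cpufreq sysfs is absent."""
    if not CPUFREQ_BASE.exists():
        return []
    files = sorted(CPUFREQ_BASE.glob("cpu[0-9]*/cpufreq/scaling_cur_freq"))
    return [khz_to_ghz(f.read_text()) for f in files]

print(core_clocks_ghz())
```

Sampling this in a loop while varying n in `stress -c n` shows the all-core clock dropping below the single-core boost clock, as described above.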

~~~
clarry
> I believe Intel processors can't boost all cores.

And that's exactly what shereadsthenews's point is. They can't boost all
cores, and they are not boosting _any_ core beyond the all-core capacity if
it's truly a CPU that runs at 5 GHz all the time.

~~~
XMPPwocky
Turbo Boost can certainly apply to all cores - the limits you hit there are
TDP- and time-based, not strictly thermal.

So, for example, my old laptop CPU would clock itself up to 2.7GHz on all
cores... well, okay, it was a dual core, so that's not saying much, but still.
But it'd only maintain that boost for a few seconds - under sustained load it
dropped down to 2.5. This wasn't because of thermals, but rather because
2.7GHz was a Turbo Boost frequency, and once the power-limit (PL2) turbo
timer runs out...

~~~
XMPPwocky
And to explain why they don't have, say, one core boosting to 5.1GHz... well,
let's see what Silicon Lottery says.
> As of 3/16/19, the top 38% of tested 9900Ks were able to hit 5.0GHz or
> greater.

> As of 3/16/19, the top 8% of tested 9900Ks were able to hit 5.1GHz or
> greater.

So, Intel'd cut their yield by more than a factor of four if they only let
parts that could hit 5.1 into this bin. For a 2% single-core performance
boost...
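The arithmetic behind that trade-off, using the Silicon Lottery percentages quoted above (the 2% is just 5.1/5.0):

```python
# Share of tested 9900Ks reaching each all-core clock (Silicon Lottery, 3/16/19)
frac_5_0 = 0.38  # hit 5.0 GHz or greater
frac_5_1 = 0.08  # hit 5.1 GHz or greater

# Requiring 5.1 GHz instead of 5.0 GHz shrinks the eligible bin by:
bin_shrink = frac_5_0 / frac_5_1   # ~4.75x fewer qualifying chips

# ...in exchange for a clock (and roughly performance) gain of only:
clock_gain = 5.1 / 5.0 - 1         # ~2%

print(f"bin shrinks {bin_shrink:.2f}x for a {clock_gain:.1%} clock bump")
```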

~~~
mackal
I think those numbers are for chips that can do 5.1 GHz all-core though,
which is probably a lot fewer than those that can do 5.1 GHz on a single
core.

~~~
XMPPwocky
As far as I know, K-series parts don't support binning of individual cores- if
you have one bad core that'll only hit 5.0, 1-core turbo to 5.1 will still
result in the OS scheduler periodically picking that core to use, it clocking
up to 5.1, and problems resulting.

Might be wrong, though.

~~~
wtallis
Intel's Turbo Boost 3.0 [1] was their attempt to take advantage of the fact
that some cores on a chip can clock higher than others. It does not work well
in practice, because it requires too much collaboration with motherboard and
OS vendors. This feature is not available on their desktop platform, which the
i9-9900KS uses.

[1] [https://www.intel.com/content/www/us/en/architecture-and-tec...](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-max-technology.html)

------
ac130kz
Some of the 9900K chips are able to push 5.2 GHz; this is not a proper answer
to AMD's new lineup.

~~~
kitchenkarma
I have one constantly pushing 5.1 GHz (SpeedStep etc. disabled; it's been
stable for months). I bought it because there was no comparable AMD CPU, and
as far as I know AMD is still behind. Why do you think it is not a proper
answer?

~~~
clarry
I think by AMD's new lineup they're referring to Ryzen 3000 series, which
isn't out yet. If the rumors are true, the top models come with 12 to 16
cores, higher IPC and higher clocks than the current Zens, pushing 5GHz boost.

A current 9900K might be some 20-30% faster than a current Zen, but it will
no longer be so with the new lineup.

Meanwhile, mitigations are eating up Intel's performance advantage...

~~~
kitchenkarma
I saw some leaked benchmarks today and it doesn't look great. I really hope
AMD will kick Intel (I even have a Ryzen 1800X too and will buy the new
16-core one), but I need the fastest possible single-core performance, and by
the look of it, at best it will be the same. But AMD has other problems, like
huge DPC latency, which makes it difficult to use for real-time computations.
If Ryzen happens to have the same single-core speed as a 5 GHz 9900K and packs
16 cores each capable of delivering it, I'll swap my Intel in no time.

------
zwerdlds
Still marketing SMT. Interesting move.

~~~
snvzz
Desperate, is what they are.

------
qwerty456127
Why 5 GHz all the time? I'd love to have such an extremely powerful CPU, but
I'd actually appreciate it if it could downclock itself automatically and
stay as cold as possible whenever I don't need its full power. Sometimes I
run heavy computations, and having eight 5 GHz cores sounds great, but most
of the time I just read or write something, so even 1 GHz sounds like
overkill.

~~~
icegreentea2
Base frequency isn't the same as lowest frequency (yeah... it's weird). Base
frequency is vaguely related to the idea that if you had all cores running at
the base frequency, you would be running just about at the system's TDP (it's
really a complete mess; this is a simplification). Your system can still drop
CPU cores down to 400-800MHz in low-energy states.

What this announcement is basically saying is that Intel now has an 8-core
chip where all 8 cores can run at 5GHz indefinitely, "out of the box".
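A toy version of that simplification (the constants here are made up to land near the advertised clocks; this is not Intel's actual power model): dynamic power scales roughly with frequency times voltage squared, so for a fixed power budget the sustainable all-core clock is lower than what fewer active cores could afford.

```python
# Toy model: dynamic power per core ~ c * f * V^2.
# c is a made-up capacitance-like constant tuned so the numbers look
# plausible for an 8-core, ~127 W part; none of this is Intel's real model.
def package_power(cores: int, f_ghz: float, volts: float, c: float = 2.2) -> float:
    return cores * c * f_ghz * volts ** 2

def max_all_core_clock(cores: int, tdp_watts: float, volts: float, c: float = 2.2) -> float:
    """Highest all-core frequency whose package power stays within the budget."""
    return tdp_watts / (cores * c * volts ** 2)

all_core = max_all_core_clock(cores=8, tdp_watts=127, volts=1.2)
one_core = max_all_core_clock(cores=1, tdp_watts=127, volts=1.2)
print(f"all-core: {all_core:.2f} GHz; one active core's power budget "
      f"would allow {one_core:.2f} GHz (in practice voltage and silicon "
      f"limits cap single-core clocks long before the power budget does)")
```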

~~~
vbezhenar
According to siliconlottery.com, 38% of 9900Ks are overclockable to 5 GHz.
They probably just decided to select good chips from the 9900K line at the
factory, so these are not exactly new chips.

~~~
zerd
That's mentioned in the article: "The new Core i9-9900KS uses the same silicon
currently in the i9-9900K, but selectively binned in order to achieve 5.0 GHz
on every core, all of the time."

------
dogma1138
Most 9900Ks can hit a 5.0 all-core OC for non-AVX loads.

With AVX, 4.8-4.9 is still doable without hitting the top 30% of CPUs in the
CPU lottery.

My 9900K does 5.1 without any AVX offset, but this is a top 10-20% CPU if the
figures from Silicon Lottery are to be believed.

So it’s not that surprising Intel can simply bin CPUs to do 5.0 at near stock
voltages since many resellers have been doing just that.

~~~
sixothree
What does that spell for the regular 9900K, then?

~~~
dogma1138
Nothing. If you don’t care about AVX workloads, you can get a 9900K and set
it to 5.0GHz with an AVX offset of 2, pretty much out of the box.

Unless the KS guarantees a 5.3-5.4 all-core OC, I don’t see it being anything
more than a PR release anyhow.

That said, I’m not even sure the 9900KS doesn’t come with an AVX offset to
begin with. Most higher-end motherboards come with a 9900K 5.0 preset anyhow,
which sets the voltage to about 1.3-1.325V with an AVX offset of 3; it just
yells at you that you need a good cooling solution and that this is not
guaranteed to work.

------
IlegCowcat
I am likely the odd one out here, but wouldn't the capability to turbo a
single core to, let's say, 5.5 GHz or higher as factory stock be more useful
in real life than a one-through-eight-core turbo of 5 GHz instead of 4.7?
There are still enough single-core/single-threaded apps out there that could
benefit from faster single-core performance, and this newest and hottest
(also in temperature) i9 cannot go faster in single-core than the 9900K.

------
kitchenkarma
I have a 9900K binned for 5.1 GHz all-core. Absolutely brilliant CPU. I wish
there was a 16-core version though.

~~~
mabbo
What's the power usage on that? I could imagine the heat from it keeping your
home warm on a cold winter's night.

~~~
kitchenkarma
I didn't measure. It is not too hot. I am typically getting 50-65 C under my
workloads.

~~~
kitchenkarma
I forgot to add - it's been delidded. Crazy I know.

~~~
doyoulikeworms
Woah, what do you use it for?

~~~
kitchenkarma
Ableton :-)

------
Jonnax
Is it going to be that much faster? 300MHz faster than the current top one,
according to the article.

~~~
XMPPwocky
And the speed of my overclocked 9700K. If anything, this is just a "Hey, some
people can cool an 8-core CPU at 5GHz, let's make a new bin for the 40% of
CPUs that can maintain that" release.

------
polskibus
I wonder how much of that power will be eaten by Spectre et al. mitigations.

------
ChuckMcM
From an interesting historical perspective, I mark the end of Moore's Law in
2001, with Intel's prediction of a 5GHz "Netburst" part in 2005 that could
not keep itself from melting. Somewhere I have a marketing roadmap showing
5GHz in 2005 and 10GHz in 2010. It was aspirational of course, but seeing
what had to happen between then and now to get a chip that runs at 5GHz all
the time on their architecture illuminates the challenges they face.

~~~
earenndil
That's an exaggeration, IMO. Moore's law didn't really start losing steam
until 2014-2015.

~~~
ricardobeat
If you look at transistor count, yes. But single-core performance has
stagnated since ~2003; that’s when we hit the 3 GHz mark. Progress since then
has been a lot slower.

~~~
earenndil
GHz are basically meaningless. The 2GHz CPU in my laptop is an order of
magnitude faster than anything 3GHz from 2003.

~~~
ricardobeat
True for practical uses; most of the performance increase comes from more
bandwidth and parallelism. But it's a mere 2-4x increase in single-thread
performance over 15+ years:
[https://preshing.com/images/integer-perf.png](https://preshing.com/images/integer-perf.png)
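For scale, a 2-4x gain over 15 years can be turned into an implied annual rate (my back-of-the-envelope conversion, not a figure from the chart itself):

```python
def annual_rate(total_gain: float, years: float) -> float:
    """Compound annual growth rate implied by a total speedup over a span."""
    return total_gain ** (1 / years) - 1

# 2x to 4x single-thread improvement over 15 years:
low = annual_rate(2.0, 15)   # just under 5% per year
high = annual_rate(4.0, 15)  # just under 10% per year
print(f"roughly {low:.1%} to {high:.1%} per year")
```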

------
Traster
So as I understand it, this isn't new silicon; it's just binning the existing
9900K. If you wanted an overclocked 9900K, you would previously have gone to
Ciara, who obviously bin, overclock and verify their systems anyway. So now
you go to Ciara and Ciara buys a 9900KS from Intel, instead of buying five
9900Ks from Intel and finding the one that would've been as fast as the
9900KS anyway.

------
Epopeehief54
"8-core processor that will run at 5.0 GHz during single core workloads and
multi-core workloads."

Under full AVX workloads, using the Intel stock cooler?

Highly doubt it.

------
Narishma
Is that a typo in the table, or do those CPUs really cost the same whether
they have an integrated GPU or not?

~~~
wtallis
Those numbers are Intel's "Recommended Customer Price", not actual retail
prices. The -F parts really are listed with the same RCP as the parts with
GPUs enabled. No, it doesn't make much sense, but Intel _has_ been
experiencing a CPU manufacturing crunch, and the desktop market gets the short
end of the stick when that happens.

------
gigatexal
There's no doubt about it: this will be a beast of a gaming chip. It will
also likely cost an arm and a leg (it has to; it's binned silicon, meaning
it's supply constrained) and likely have a really high TDP.

------
gumby
Curious what happens when you call into the vector (AVX) hardware.

------
IgorPartola
So now ZombieLoad et al. can be exploited even faster!

------
Thev00d00
Now you can run the speculation mitigations much more quickly!

------
happycube
It's nice to see AMD competing well enough for Intel to actually push what
their process can do. Finally!

------
coliveira
Well, this also means that attacks exploiting speculation will run much
faster!

------
bashwizard
I hope it includes a horde of running zombies.

------
bertomart
Just in time for AMD's Computex keynote... nicely played.

------
lousken
no i7-9700KS? disappointing

------
OrgNet
hyperthreading?

