
Intel's New Low: Commissioning Misleading Core I9-9900K Benchmarks [video] - esistgut
https://www.youtube.com/watch?v=6bD9EgyKYkU
======
moconnor
I worked in HPC when NVIDIA started taking serious market share from Intel. My
memory of Intel's performance comparisons is that they were often technically
unsupportable once you scratched the surface.

In one case, a third party who was demonstrating how much faster Intel Xeon
Phi was for deep learning admitted that they were comparing highly-optimised
code to unoptimised code in their results.

This doesn’t surprise me at all.

~~~
m_mueller
I've been in the same boat and I completely agree. One thing that surprises
people is that getting decent performance out of a GPU is actually easier than
out of a CPU: vectorization and multithreading are unified in the parallel
programming model, and cache optimizations are mostly unnecessary. Those are
the two biggest time sinks when optimizing for a CPU, solved right there. What
you instead have to care about is resource utilization per thread, and that is
IMO far easier to reason about and optimize for.

~~~
celrod
Are there any good guides or tutorials? I've found GPUs difficult, in part
because I don't really know where to start.

FWIW, I have an AMD GPU with ROCm. HIP is a lot like CUDA, so NVIDIA-focused
tutorials ought to be fine, with the caveat that I'd have to be aware of
hardware differences.

~~~
dragontamer
The main issue IMO is thread-divergence. Because "threads" on a GPU are really
SIMD-elements, things work very differently.

Let's use a simple example:

        for(int i=0; i<10000000; i++){
            if(someCondition)
                break;
            else
                doStuff();
        }

In the CPU case, the thread breaks out of the loop as soon as "someCondition"
holds. But in the GPU case, the group only moves past the loop once
"someCondition" has held for every thread in the SIMD-group.

GPUs execute roughly 32 threads with the same instruction pointer. Let's say
thread #0 hits "someCondition" first. Thread #0 will then be set to
"disabled", but it still has to wait for the 31 other threads to be done with
the loop before continuing.

Even if 31 threads have hit "someCondition" and broken out of the loop, the
32nd thread will keep executing the loop until it is done (and threads 0
through 30 will "execute with" the 32nd thread, but throw away the results).

That's the key with SIMD. Threads run in groups of roughly 32 at a time, in
lockstep. All 32 threads must execute if-statements together and loops
together.

In most cases, both branches of an if/else statement will be executed by ALL
threads (with the results of the untaken branch "thrown out" by the GPU
engine through execution masks).

------
cbg0
In text form: [https://www.techspot.com/article/1722-misleading-core-i9-9900k-benchmarks/](https://www.techspot.com/article/1722-misleading-core-i9-9900k-benchmarks/)

~~~
asr
This article says that Techspot has obtained the 9900K (without being bound by
an NDA, as others are), that Intel is releasing misleading results while other
reviewers _are_ bound by an NDA, and that Techspot is not going to show its
own benchmarks--which could refute the misleading results Intel has authorized
for public release--out of a sense of "professionalism."

 _Actually_ , any "professional" journalism outfit would show the public the
newsworthy results, not withhold this information from readers because of a
desire not to piss off industry contacts.

~~~
m-p-3
The thing is, you only need to piss off an industry contact once to be taken
off their list, essentially neutering yourself in the future and making your
site less competitive compared to other websites.

It's a sad state of affairs, but that's the reality of it.

~~~
craftyguy
They were already neutered: they weren't given a part to review. The part they
allegedly have was probably obtained through unofficial means, since it is
highly unlikely Intel would just give out parts to media outlets ahead of a
launch without an NDA attached.

------
c2h5oh
Also, they seem to have tested the AMD CPUs in "game mode", which disables
half of the cores (one of the two CCXes)...

~~~
iddqd
I like that "Game mode" makes the AMD CPUs perform worse in game
benchmarks...

~~~
cma
The reviewers used Game Mode on a CPU without NUMA, so it was pointless and
could only hurt performance.

~~~
Filligree
It can still help, if used in precisely the right circumstance. Not that this
makes your statement any less true, I'm just being needlessly pedantic.

One amusing thing I've noticed is that, playing the _same_ game (Minecraft),
Windows runs it ~25% faster in game mode than default NUMA.

In Linux the situation is reversed; it runs 10% faster with NUMA enabled,
maybe because the Java garbage collector is NUMA-aware and I'm using enough
memory that it's split across NUMA nodes anyway.

But either mode is faster in Linux than Windows.

------
zoggenhoff
Does Intel really think this approach is good for them? As a technical person,
all I see is a company in trouble with products they need to lie about. This
goes beyond market speak - it's deceptive.

~~~
berbec
The people who won't be fooled by this are likely the customers who are
interested in the actual 10% difference for the high end and likely want this
chip anyway.

~~~
dragontamer
I'm not sure about that.

At $499, the i9-9900K is almost competing against the 12-core Threadripper
2920X ($649, 12 cores/24 threads, 4.4GHz clock, 60 PCIe lanes, quad-channel
memory).

I think most people will find more use out of +4 cores (granted, on a NUMA
platform) than higher clocks. Cores for compiling code, rendering, video
editing, etc. etc.

Pretty much only gamers want +Clock speeds, and more and more games actually
use all-cores these days (Doom, Battlefield, etc. etc.)

-----------

That's the thing. The i9-9900K isn't even a "high end chip" anymore. It's at
best "highest of the mid-range" now that the HEDT category (AMD Threadripper,
or Intel-X) exists.

Once you start getting into 8 cores/16 threads, I start to worry about dual-
channel memory and 16x PCIe lanes + 4GB/s DMI to the southbridge. It's getting
harder and harder to "feed the beast". A more balanced HEDT system (like
Threadripper's quad-channel memory + 60 PCIe lanes) just makes more sense.

~~~
Ooveex2C
> Pretty much only gamers want +Clock speeds

I wish. We use a commercial path-tracer that scales very well to many cores,
GPUs and entire clusters when it's chewing away at a single fixed scene or
animation.

But in interactive mode, many scene modifications are bottlenecked on one or a
few threads and locks until it gets back into the highly optimized rendering
code paths. So a lot of work goes into quickly shutting down as many
background threads as possible to benefit from the high turbo-boost clocks on
Xeon Gold processors, so the user doesn't have to wait long, and then ramping
them back up when it's just rendering the fixed scene.

~~~
zoggenhoff
Agreed. Games aren't the only thing people do with lots of cores / HEDT. Give
me a 128 core machine and I'll happily keep them busy all day with work. No
need for a heater either.

------
TwoNineA
Preorders for the 9900K are $578 USD.

For that you can get a Ryzen 2700X, a nice motherboard and a 256GB SSD. The
performance delta shouldn't be more than a 15% deficit for the Ryzen, and only
in a few specific games.

~~~
vinw
Keep seeing this 'delta' word. From context it is basically like 'difference'.
But is there a delta between difference and delta?

~~~
berbec
Delta makes you sound smarter.

~~~
gameswithgo
it is less typing, and less data. save the planet.

~~~
berbec
'Δ' must be a sign of Greenpeace membership, then.

~~~
marvin
Only if you've memorized how to type it. If you have to Google it beforehand,
it serves as cheap signaling at best and an environmental transgression at
worst.

------
ColanR
Decent summary:

> I don’t have too much of an issue with Intel commissioning the report
> itself, and the Principled Technologies report is very transparent as they
> clearly state how they tested the games and configured the hardware. The
> results and testing methods are heavily biased, but they haven’t attempted
> to hide their dodgy methods. You can dig into the specs and find all the
> details, it’s still dodgy but it’s a paid report, so it’s somewhat expected.

~~~
c2h5oh
I disagree.

The vast majority of buyers, and sadly a huge chunk of the tech press, won't
be able to tell from the settings whether or by how much the benchmark was
skewed in Intel's favor.

And it was plenty skewed.

------
fromthestart
Further reason to never purchase Intel again.

~~~
holtalanm
yeah. I went Ryzen for my new laptop. 100% happy with it, and this kind of
crap from Intel just solidifies my decision.

~~~
SteveNuts
Which Laptop?

~~~
unethical_ban
I got (off some sweet discounts that expired over the weekend) a Thinkpad E485
with 128GB M.2, 16GB RAM, and 2700U Ryzen, 1080p screen for $720 shipped.

------
Symmetry
I had some hope that with a new CEO at the helm this behavior would stop.

~~~
craftyguy
There is no new CEO, only an interim/acting CEO.

------
MacThe3rd
Interesting follow-up: Interview w/ Principled Technologies on Intel Testing
(9900K) - [https://youtu.be/qzshhrIj2EY](https://youtu.be/qzshhrIj2EY)

------
gcb0
Intel is failing benchmarks and has security vulns all around. Nvidia is
failing to deliver the price/perf they promised years ago. AMD is the opposite
of both and its stock continues to dive. Go figure.

~~~
swarnie_
> AMD is the opposite of both and its stock continues to dive. Go figure.

AMD's share price is up almost 1400% in 2 years; what on earth are you on
about?

~~~
SteveNuts
It's currently in a bit of a slump after some financial firms lowered their
price targets for the stock.

I have a feeling it'll make its way back up though.

~~~
swarnie_
I think when you outpace the general index by multiple factors over years, a
small pullback is not only expected, it's healthy.

