
AMD Ryzen 9 3950X is the fastest processor on Geekbench - areejs
https://www.techquila.co.in/amd-ryzen-9-3950x-vs-intels-18-core-i9/
======
tmd83
If that's really true, a 16-core AMD part outperforming an 18-core Intel
processor that costs twice as much, that's fabulous news for all consumers.
Hopefully it will stop Intel from setting absurd prices for mid-range
processors and generally push the industry forward.

~~~
H8crilA
What really matters (for people who do the CAPEX and OPEX math on their
assets; not gamers) is the performance/power ratio. Without that I don't see
AMD eating much of Intel's lunch (35B vs 208B market cap).

~~~
sbov
The Zen 2 16-core chip has a 105 W TDP. The chips it's wiping the floor with
are 165 W TDP. TDP doesn't necessarily correlate with real-world usage, but
benchmarks show that AMD chips run much closer to their rated TDP than Intel
chips do, so the gap is probably even wider. The strength of Intel chips is
that you can pump a lot of power through them to hit higher clock rates.

It sounds like you're saying performance/power is a benefit for Intel,
possibly based upon the history of AMD chips, but that line of thought has
been wrong since the Ryzen architecture.

~~~
jsgo
Not Ryzen related, but you seem pretty up to speed with AMD products. Does
that include Radeon as well? I have a MBP and I am considering a Radeon VII
for my external GPU (currently a GTX 1080, but it's only usable in Windows.
Thanks, Mojave). My main concern is thermals and noise. Does it perform on
par with Nvidia there, a little worse, or considerably worse? Power draw I'm
not that concerned with.

~~~
xigency
In the most recent generations, Radeon has run hotter than nVidia cards for
similar performance. Seems to be true of the Radeon VII as well [0].

[0] [https://www.theverge.com/2019/2/11/18194190/amds-radeon-
vii-...](https://www.theverge.com/2019/2/11/18194190/amds-radeon-vii-is-a-hot-
loud-powerful-answer-to-nvidia)

~~~
jsgo
Thanks for that. That's a huge bummer. I really wish Apple wouldn't force the
Metal issue with Nvidia. As a user, I was fine with the various scripts I had
to run after macOS updates to get the card running again, but they nixed that
outright. Oh well, hopefully AMD can solve the fan problems, or Nvidia and
Apple can work something out, either or.

~~~
wlesieutre
The Radeon 5700 and 5700 XT are supposed to be competitive with the RTX 2060
and RTX 2070 at slightly lower prices. Only reference cards right now, but
things might be looking up once OEMs have a chance to put better coolers on
instead of AMD's reference blower.

I'm planning to hold out for next gen when they get ray tracing hardware to be
a bit more future proof (my GTX 970's not dead yet), but since I'm thinking of
trading my Wintendo out for a Mac + eGPU setup it's nice to see that AMD could
actually be a good GPU option now.

Those were just announced this week, so keep an eye out for 3rd party
benchmarks soon.

~~~
tracker1
Will probably pull the trigger on a Radeon VII myself, only because of the
better Linux drivers and the possibility of hackintosh usage. For my current
system, I did a mid-cycle GPU upgrade (GTX 1080) and added NVMe a couple of
years ago. Still running a 4790K with 32 GB RAM, and it does great for most
stuff, but not so much for encoding or dev work (a couple of DBs and services
in the background).

------
mrb
Surprisingly, no one has noticed or reported that the memory is _heavily_
overclocked (+29%) in this specific benchmark. Here is the direct link to the
detailed results:
[http://browser.geekbench.com/v4/cpu/13495867](http://browser.geekbench.com/v4/cpu/13495867)

Officially, the Ryzen 9 3950X supports up to DDR4-3200 (1600 MHz) according
to the published specs [https://www.amd.com/en/products/cpu/amd-
ryzen-9-3950x](https://www.amd.com/en/products/cpu/amd-ryzen-9-3950x);
however, in this benchmark the memory was overclocked to 2063 MHz:

    
    
      Memory: 32768 MB DDR4 SDRAM 2063MHz
    

Memory overclocking heavily impacts Geekbench multi-core scores. For example,
the older Threadripper 2950X sees its score boosted by +18% (39580 vs 46908)
with a +9% memory overclock (1466 vs 1600 MHz):
[http://browser.geekbench.com/v4/cpu/compare/13400527?baselin...](http://browser.geekbench.com/v4/cpu/compare/13400527?baseline=12434177)
Although, to be honest, comparing random Geekbench scores in their database
is not an exact science, because too few system details are reported (for
example, we don't know whether the systems are running dual- or quad-channel
DDR4) and we don't know what other hardware mods users have made.
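
As a sanity check on those percentages, here is the arithmetic in plain
Python (numbers taken from the linked results):

      def pct_gain(base, new):
          # Percent increase of `new` over `base`
          return (new - base) / base * 100.0
      
      print(f"{pct_gain(1600, 2063):.1f}%")    # 28.9% -> the "+29%" memory overclock
      print(f"{pct_gain(1466, 1600):.1f}%")    # 9.1%  -> the "+9%" 2950X overclock
      print(f"{pct_gain(39580, 46908):.1f}%")  # 18.5% -> the "+18%" score boost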

~~~
mappu
_> Officially Ryzen 9 3950X supports up to DDR4-3200 (1600 MHz)_

No, it supports "4200+ with ease, 5133 demonstrated".

From official slides [https://www.anandtech.com/show/14525/amd-
zen-2-microarchitec...](https://www.anandtech.com/show/14525/amd-
zen-2-microarchitecture-analysis-ryzen-3000-and-epyc-rome/11)

~~~
mrb
While AMD claims it can be overclocked to 4200 or 5133, that doesn't
invalidate my claim that it is officially spec'd for DDR4-3200 according to
the product page: [https://www.amd.com/en/products/cpu/amd-
ryzen-9-3950x](https://www.amd.com/en/products/cpu/amd-ryzen-9-3950x)

Note that I am not playing down the 3950X's performance. Overall it is
superior to Intel's counterparts in most respects.

~~~
shmerl
I wonder if G.Skill will release new RAM targeted for Zen 2. Their 3200 MHz
FlareX works pretty well with Zen+.

~~~
winkeltripel
It's G.Skill. They'll release ~100 new SKUs for it, as long as they can get
enough well-binned modules from Samsung.

------
fortytw2
I think the bigger news here is that it almost _doubles_ the score of the
comparable AMD Threadripper 2950X (also 16c/32t).

Going from 34650 to 61072 in a generation is no joke, especially while also
being a far smaller, much lower-power part.

~~~
lostmsu
Don't you think that means the number is probably fake?

Before the release and subsequent independent testing, trust in any
exceptional result should be very low.

~~~
kllrnohj
I'd be more likely to believe Geekbench is just a terrible, broken benchmark
than anything else.

An Epyc 7501 (32c/64t) apparently gets only a 17k multi-core score on
Geekbench under Windows:
[https://browser.geekbench.com/processors/2141](https://browser.geekbench.com/processors/2141)

Which is hilariously wrong. And if you think that's some quirk of Epyc, well,
the same CPU gets 65k when run under Linux:
[https://browser.geekbench.com/v4/cpu/10782563](https://browser.geekbench.com/v4/cpu/10782563)
So clearly there's a software issue in play. Maybe it's related to the new
Windows scheduler change. Maybe Geekbench just has some pathologically bad
behavior. Who knows.

So yes, we should wait for release and independent testing before getting too
excited, even if that's just so we get numbers from something other than
Geekbench.

~~~
jfpoole
Geekbench exposes some strange behaviour in the memory allocator under
Windows. On systems with more than 8 cores, Geekbench spends a significant
chunk of time in the memory allocator due to contention. This issue (at least
to this degree) isn't present on Linux, which is why Epyc scores are much
higher on Linux than on Windows.

------
alfredxing
This might be an unfair comparison — the AMD numbers are from a single
benchmark run, and the article is comparing it against the aggregated scores
of the i9-9980XE. A few i9-9980XE multi-core scores on Geekbench reach higher
than 60k as well, with the highest being 77554.

~~~
dmix
[https://browser.geekbench.com/v4/cpu/search?dir=desc&q=i9-99...](https://browser.geekbench.com/v4/cpu/search?dir=desc&q=i9-9980XE&sort=multicore_score)

Looks like a couple hit 70k+ at 3.00 GHz base [1].

[1]
[https://browser.geekbench.com/v4/cpu/13419502](https://browser.geekbench.com/v4/cpu/13419502)

~~~
Arie
Geekbench just lists the stock speeds for the chip, not the actual speeds used
for the benchmark.

~~~
dmix
Yep I think that's clear in the Geekbench interface.

------
3JPLW
The Ryzens have an absurdly long branch prediction history that makes them
much better at repetitive tasks than at messier real-world workloads. I
wonder how much this is effectively "gaming" the Geekbench suite.

[https://discourse.julialang.org/t/psa-microbenchmarks-
rememb...](https://discourse.julialang.org/t/psa-microbenchmarks-remember-
branch-history/17436/9?u=mbauman)
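
To make the thread's point concrete, here's a rough sketch of the pitfall
(hypothetical toy benchmark): time a branchy loop over the same input every
repetition, where a deep branch predictor can memorize the pattern, versus a
fresh random input each repetition. Under CPython the interpreter dilutes the
effect considerably, so treat this as the shape of the experiment rather than
a reliable measurement; compiled code shows the gap far more clearly.

      import random
      import timeit
      
      def branchy_sum(data):
          # Data-dependent branch; in compiled code, how predictable this
          # comparison is dominates the loop's cost.
          total = 0
          for x in data:
              if x < 128:
                  total += x
          return total
      
      N, REPS = 100_000, 50
      fixed = [random.randrange(256) for _ in range(N)]
      fresh = iter([[random.randrange(256) for _ in range(N)] for _ in range(REPS)])
      
      # Same input every repetition: branch history can be learned.
      t_fixed = timeit.timeit(lambda: branchy_sum(fixed), number=REPS)
      # A different input each repetition: nothing to memorize.
      t_fresh = timeit.timeit(lambda: branchy_sum(next(fresh)), number=REPS)
      print(t_fixed, t_fresh)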

~~~
chipguy
That's not the impression I got from that thread. They seem to agree that
this is bad for benchmarking, but remain undecided on whether it's good or
bad for real-world processing.

It depends on the workload. So, as always, benchmark suites are to be taken
with a grain of salt. More specific benchmarks, such as compiling a standard
set of real software packages, can give a clearer picture of performance for
those use cases.

Until we see more specific data on how these chips perform for particular
tasks, this is just FUD.

~~~
sbov
> More specific benchmarks, such as compiling a standard set of real software
> packages, can give a clearer picture of performance for those more specific
> use cases.

Is there a good place to go for this? I've tried to find software development
focused benchmarks before, but I've come up mostly empty.

~~~
localhost
Phoronix is a good place to go for compilation benchmarks -
[https://github.com/phoronix-test-suite/phoronix-test-
suite](https://github.com/phoronix-test-suite/phoronix-test-suite)

~~~
chipguy
The link I posted in a sibling comment is a more direct way to get to the
results of that suite.

------
bstar77
Bravo. Everyone on the PC side has great options now, but I feel for Mac
"professionals". It's sad they just got saddled with the horrendously
overpriced and underperforming Xeon platform. It boggles my mind why Apple
would release a $6k model that will get trounced by these chips at a fraction
of the price. I know the expandability is what you're buying into, but I
imagine 90% of Mac Pro customers couldn't care less about terabytes of memory
or a video solution that raises current VRAM limits. Add to all of that the
gimped performance you're going to get from the Intel parts with the latest
security patches.

~~~
thirdsun
Apple must have had an interest in going with AMD - the fact that they didn't
makes me think that getting macOS ready as a productive, reliable OS on AMD
CPUs isn't as trivial as we might assume. Also, is Thunderbolt even an option
with AMD?

~~~
bstar77
I think you nailed it... Thunderbolt is not possible on AMD.

~~~
NightlyDev
It is...

------
xedarius
There’s an increasing groundswell of trust in AMD and their Ryzen chips. It’s
great news. I’ve owned one for two years now and it’s fab.

The new Xbox will feature a custom Ryzen of some form. Who’s next, Apple?

~~~
Nexxxeh
>it’s fab.

Given that it's AMD, shouldn't that be "it's fabless"?

I've got a mix of Intel and AMD, and have had no brand loyalty going back to
when I replaced my Pentium 75 with a pre-unlocked AMD Duron from OcUK.

I'm so glad to see AMD not only raise its game exponentially, but also force
Intel to compete. It's good for everyone.

My next purchase will probably be a Ryzen 5 2600, because the price drop ahead
of the 3xxx has made them ridiculous value for money.

Definitely a good time to be a PC gamer.

Slightly frustrating that the integrated-graphics 3x00G chips are basically
Ryzen 2xxx chips, though. I hope the G-range gets a refresh with proper Zen
2-based chips shortly.

WRT "who next", did you see the Chinese AMD custom Ryzen+Vega APU console last
year, the Subor Z-Plus, with 8GB GDDR5 as shared system and graphics memory?

~~~
tracker1
Totally agreed on the 3xxx(G|H) parts not being Zen 2, and really misleading
on that front. Though they're mostly underclocked with lots of room for boost,
so competitive to Intel's. Also the onboard vega gfx almost doesn't suck by
comparison.

------
gigatexal
That’s a lot of performance for 749 USD. I’m building a new workstation /
gaming rig in about 18 months, so I’ll be spoiled for choice by then,
especially on the used market, as these will be old hat by that point.

------
bluedino
Would rather see numbers for Cinebench, video encoding, 7-Zip...

------
kitchenkarma
It looks like single-core performance is still worse than the i9-9900K's. I
wonder what this will look like when overclocked. Sadly, my workload
prioritizes a fast core over many cores: audio production. This workload
cannot be parallelized, as one plugin depends on the output of another. If
the plugins can't keep up with filling the buffer, you get stuttering.
Single-core performance limits how much processing you can have on a single
audio track, and the number of cores limits how many tracks of that
processing you can have. It looks like I wouldn't be able to run my chain in
real time on this new AMD even if it had 100 cores.

~~~
prennert
Why would it not be possible to use multiple cores? Even though the plugins
depend on the output of the previous one, they could sit on different cores,
passing their output on from core to core. Even though that would not be
parallel, being distributed, it could be faster (in some cases it might not).

~~~
kitchenkarma
These cannot run at the same time, as the output of one feeds into the next.
Data travelling from one core to another can mean additional performance
loss. Some plugins use multiple cores internally if whatever they calculate
can be parallelised, but still, the quicker each one finishes, the more
plugins you can run in your chain.

~~~
saltcured
This is silly. A bottleneck for audio processing is a particular product's
flaw, not an intrinsic challenge of audio. A modern machine capable of doing
interactive, high-resolution graphics rendering or high-definition movie
rendering can do a stupendous amount of audio processing without even trying.

The data rates for real-time audio are so much smaller than modern memory
system capabilities that we can almost ignore them. A 192 kHz, 24-bit,
6-channel audio program is only about 3.5 MB/s, thousands of times less than
what a modern workstation CPU and memory system can muster.
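
The arithmetic, for reference (the ~50 GB/s DRAM figure is an assumption for
a typical workstation):

      # 192 kHz sample rate, 24-bit (3-byte) samples, 6 channels
      audio_rate = 192_000 * 3 * 6     # bytes per second
      dram_rate = 50e9                 # assumed ~50 GB/s of DRAM bandwidth
      print(audio_rate / 1e6)          # ~3.5 (MB/s)
      print(dram_rate / audio_rate)    # ~14000x headroom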

The stack of audio filters you describe are a natural fit for pipelined
software architectures, and such architectures are trivially mapped to
pipelined parallel processing models. Whatever buffer granularity one might
make in a single-threaded, synchronous audio API to relay data through a
sequence of filter functions can be distributed into an asynchronous pipeline,
with workers on separate cores looping over a stream of input sample buffers.
It just takes an SMP-style queue abstraction to handle the buffer relay
between the workers, while each can invoke a typical synchronous function.
Also, because these sorts of filters usually have a very consistent cost
regardless of the input signal, they could be benchmarked on a given machine
to plan an efficient allocation of pipeline stages to CPU cores (or to predict
that the pipeline is too expensive for the given machine).

Finally, audio was a domain motivating DSPs and SIMD processing long before
graphics. An awful lot of audio effects ought to be easily written for a high
performance SIMD processing platform, just like custom shaders in a modern
video game are mapped to GPUs by the graphics driver.

~~~
kitchenkarma
This is a simple fact of life, and downvoting isn't going to change it. A
plugin cannot start processing before it gets data from the previous plugin
(sure, it can do some tricks like pre-computing filter coefficients). How are
you going to get around that? What happens within a plugin can of course be
parallelised, but beyond that, the processing is inherently serial. If
computing a filter takes X time and the length of the buffer is Y, you can
only compute so many filters (Y/X) before it starts stuttering. You can
spread them across different cores, but the filters cannot be processed at
the same time, because each needs the output of the previous one.
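
To put hypothetical numbers on that Y/X budget:

      # Hypothetical numbers: a 256-sample buffer at 48 kHz must be filled
      # every Y seconds; assume each filter costs X seconds of CPU per buffer.
      Y = 256 / 48_000    # ~5.33 ms of audio per buffer
      X = 0.25e-3         # assumed 0.25 ms per filter per buffer
      print(Y / X)        # ~21 filters fit in the chain before stuttering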

~~~
saltcured
Pipelining means that each stage further down the pipeline is processing an
"earlier" time window than the previous stage. They don't run concurrently to
speed up one buffer, but they run concurrently to sustain the throughput while
having more active filters.

For N stages, instead of having each filter run at 1/N duty cycle, waiting for
their turn to run, they can all remain mostly active. As soon as they are done
with one buffer, the next one from the previous pipeline stage is likely to be
waiting for them. This can actually lower total latency and avoid dropouts
because the next buffer can begin processing in the first stage as soon as the
previous buffer has been released to the second stage.

~~~
kitchenkarma
I think this is one of the most misunderstood problems these days. Your idea
could work if the process weren't real-time. In a real-time audio production
scenario you cannot predict what event is going to happen, so you cannot
simply process the next buffer ahead of time, because you won't know in
advance what needs to be processed. At the moment these pipelines are as
advanced as they can be, and there is simply no way around needing to process
X filters in Y amount of time to work in real time. If you think you have an
idea that could work, you could solve one of the biggest unsolved problems
music producers face.

~~~
saltcured
Something like a filter chain for an audio stream is truly the textbook
candidate for pipelined concurrency. Conceptually, there are no events or
conditional branching. Just a methodical iteration over input samples, in
order, producing output samples also in order.

Whatever you can calculate sequentially like:

    
    
      while True:
         buf0 = input.recv()
         buf1 = filter1(buf0)
         buf2 = filter2(buf1)
         buf3 = filter3(buf2)
         output.send(buf3)
    

can instead be written as a set of concurrent worker loops.

Each worker is dedicated to running a specific filter function, so its
internal state remains local to that one worker. Only the intermediate sample
buffers get relayed between the workers, usually via a low-latency
asynchronous queue or similar data structure. If a particular filter function
is a little slow, the next stage will simply block on its input receive step
until the slow stage can perform the send.

(Edited to try to fix pseudo code block)
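
To make the concurrent version concrete, here is a minimal runnable sketch in
Python, with hypothetical toy filters and queue.Queue standing in for the
low-latency SMP queue (for pure-Python filters the CPython GIL limits true
parallelism; real DSP code in native extensions releases it, but the
structure is the same):

      import threading
      import queue
      
      def worker(filter_fn, in_q, out_q):
          # One worker per filter stage; internal filter state stays local
          # to the thread, and only buffers are relayed through the queues.
          while True:
              buf = in_q.get()
              if buf is None:          # sentinel: shut the pipeline down
                  out_q.put(None)
                  return
              out_q.put(filter_fn(buf))
      
      # Hypothetical toy filters; real ones would be DSP routines.
      filters = [
          lambda b: [x * 0.5 for x in b],                 # gain
          lambda b: [x + 0.1 for x in b],                 # offset
          lambda b: [max(min(x, 1.0), -1.0) for x in b],  # hard clip
      ]
      
      qs = [queue.Queue(maxsize=4) for _ in range(len(filters) + 1)]
      for f, q_in, q_out in zip(filters, qs, qs[1:]):
          threading.Thread(target=worker, args=(f, q_in, q_out), daemon=True).start()
      
      for i in range(8):               # feed input buffers
          qs[0].put([float(i)] * 4)
      qs[0].put(None)
      
      while (buf := qs[-1].get()) is not None:
          print(buf)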

~~~
kitchenkarma
This is how it is typically being done. This is not a problem. Problem is that
being concurrent, end to end this process is serial, so you can't process any
element of this pipeline in parallel. You can run only so many of those until
you run out of time to fill the buffer. I think it could be helpful for you to
watch this video:
[https://www.youtube.com/watch?v=cN_DpYBzKso](https://www.youtube.com/watch?v=cN_DpYBzKso)

~~~
saltcured
Sorry for the late reply. We have to consider two kinds of latency separately.

A completely sequential process would have a full end-to-end pipeline delay
between each audio frame. The first stage cannot start processing a frame
until the last stage has finished processing the previous frame. In a real-
time system, this turns into a severe throughput limit, as you start to have
input/output overflow/underflow. The pipeline throughput is the reciprocal of
the end-to-end frame delay.

But, concurrent execution of the pipeline on multiple CPU cores means that you
can have many frames in flight at once. The total end-to-end delay is still
the sum of the per-stage delays, but the inter-frame delay can be minimized.
As soon as a stage has completed one frame, it can start work on the next in
the sequence. In such a pipeline, the throughput is the reciprocal of the
inter-frame delay for the slowest stage rather than of the total end-to-end
delay. The real-time system can scale the number of pipeline stages with the
number of CPU cores without encountering input/output overflow/underflow.
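
With made-up per-stage costs, the difference looks like this:

      # Hypothetical per-stage delays (seconds) for a 4-stage pipeline
      stages = [0.8e-3, 1.0e-3, 0.6e-3, 0.9e-3]
      print(1 / sum(stages))    # strictly sequential: ~303 frames/s
      print(1 / max(stages))    # pipelined: 1000 frames/s, set by the slowest stage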

Because frame drops were mentioned early on in this discussion, I (and
probably others who responded) assumed we were talking about this pipeline
throughput issue. But, if your real-time application requires feedback of the
results back into a live process, i.e. mixing the audio stream back into the
listening environment for performers or audience, then I understand you also
have a concern about end-to-end latency and not just buffer throughput.

One approach is to reduce the frame size, so that each frame processes more
quickly at each stage. Practically speaking, each frame will be a little less
efficient as there is more control-flow overhead to dispatch it. But, you can
exploit the concurrent pipeline execution to absorb this added overhead. The
smaller frames will get through the pipeline quickly, and the total pipeline
throughput will still be high. Of course, there will be some practical limit
to how small a frame gets before you no longer see an improvement.

Things like SIMD optimization are also a good way to increase the speed of an
individual stage. Many signal-processing algorithms can use vectorized math
for a frame of sequential samples, to increase the number of samples processed
per cycle and to optimize the memory access patterns too. These modern cores
keep increasing their SIMD widths and effective ops/cycle even when their
regular clock rate isn't much higher. That is a lot of performance left on
the table if you do not write SIMD code.
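
As a rough illustration of the scalar-versus-vectorized gap on one frame of
samples (numpy standing in for hand-written SIMD; the gain-plus-clip filter
is a made-up example):

      import numpy as np
      import timeit
      
      frame = np.random.randn(4096).astype(np.float32)   # one frame of samples
      
      def scalar_filter(buf):
          # Sample-at-a-time gain and hard clip
          out = np.empty_like(buf)
          for i, x in enumerate(buf):
              out[i] = min(max(x * 0.8, -1.0), 1.0)
          return out
      
      def vector_filter(buf):
          # Same math over the whole frame; numpy's kernels use SIMD loops
          return np.clip(buf * 0.8, -1.0, 1.0)
      
      print(timeit.timeit(lambda: scalar_filter(frame), number=100))
      print(timeit.timeit(lambda: vector_filter(frame), number=100))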

And, as others have mentioned in the discussion, if your filters do not
involve cross-channel effects, you can parallelize the pipelines for different
channels. This also reduces the size of each frame and hence its processing
cost, so the end-to-end delay drops while the throughput remains high with
different channels being processed in truly parallel fashion.

Even a GPU-based solution could help. What is needed here is a software
architecture where you run the entire pipeline on the GPU to take advantage of
the very high speed RAM and cache zones within the GPU. You only transfer
input from host to GPU and final results back from GPU to host. You will use
only a very small subset of the GPU's processing units, compared to a graphics
workload, but you can benefit from very fast buffers for managing filter state
as well as the same kind of SIMD primitives to rip through a frame of samples.
I realize that this would be difficult for a multi-vendor product with third-
party plugins, etc.

------
shmerl
How does the 16-core Ryzen 9 3950X have the same TDP as the 12-core Ryzen 9
3900X (105 W)? It even has a higher max boost frequency. Is it just because
of the lower base frequency?

* [https://www.amd.com/en/products/cpu/amd-ryzen-9-3900x](https://www.amd.com/en/products/cpu/amd-ryzen-9-3900x)

* [https://www.amd.com/en/products/cpu/amd-ryzen-9-3950x](https://www.amd.com/en/products/cpu/amd-ryzen-9-3950x)

~~~
fabian2k
TDP doesn't tell you what the actual power consumption will be in practice.
It is defined in some weird ways (different between manufacturers) and is
generally not intuitive. I would recommend not reading too much into the TDP;
wait for actual measurements of power consumption.

My understanding is that the TDP is typically designed to match the base
clock of the processor, and doesn't necessarily include the amount of power
needed to reach the boost clocks.

~~~
paol
Anandtech did a deep dive on how Intel calculates its TDP numbers [0]. It's
complicated (and completely different from AMD, so never try to compare the
numbers).

[0] [https://www.anandtech.com/show/13544/why-intel-processors-
dr...](https://www.anandtech.com/show/13544/why-intel-processors-draw-more-
power-than-expected-tdp-turbo)

~~~
shmerl
Interesting, thanks! From that article, Intel's TDP is roughly equal to the
power draw at full load at the base frequency. How does AMD define it, then?

Also, what about GPU TDPs?

~~~
paol
I've read some in-depth analysis of AMD's calculation somewhere too, but I
forget where. I do remember that AMD's TDP numbers are closer to the maximum
power draw.

Never investigated GPUs. One way to find out would be to trawl Anandtech
reviews and collect TDP and measured power draw numbers, they always take
measurements.

------
truth_seeker
Powerful stuff!

In order to see optimal use of that many cores, I'd like to see:

1. The most-used legacy software libraries incorporate concurrent/parallel
algorithms, for both CPU-bound and mixed (CPU + IO) loads.

2. Some inventive, compact, and powerful heatsink designs implemented in
laptop models.

------
pier25
Why do you think Apple hasn't moved to AMD CPUs for its Mac products?

In the past I thought maybe the rumored move to ARM could be the reason, but
now, with the new Mac Pro, I doubt Apple will move to ARM except for some of
its laptops.

~~~
gnode
That Apple has married itself to Thunderbolt (co-developed by Apple and
Intel) may have had something to do with it. Previously, Thunderbolt was not
well supported on AMD platforms, as I understand it. This appears to be
changing, though.

~~~
ComputerGuru
Thunderbolt has been an open standard (and now royalty-free) for some time.
I'm sure that if someone as big as Apple wanted to adopt AMD and that was the
only blocker, it wouldn't be a blocker for long.

------
imagetic
Pretty stoked about what AMD is doing. Even if these benchmarks are inflated,
it's amazing bang for your buck, and you can build some really solid budget
machines. The next generation with PCIe 4.0 looks extremely promising. I wish
they'd concentrate on pressuring motherboard companies to make more
professional non-server boards for the Ryzen 9 chips.

------
Keyframe
Does Ry(Zen) now support AVX-512 and Thunderbolt?

It will be interesting to see if Jim Keller comes up with something for Intel
too, now that he's there and no longer with AMD.

~~~
Jonnax
From what I remember, Thunderbolt is Intel proprietary, even though at launch
they said it would be open.

Does anyone know if that has changed?

~~~
eigenspace
Yes, Thunderbolt 3 is open now. There are a few AMD motherboards out there
right now that support it, but they're not common.

------
bitL
With 1903 update or without?

------
jammygit
I went with a Ryzen for my newest desktop and it’s been great so far; I love
it.

Not only does it work well, it also fixed the issues I was having: shutdown
used to take 5-10 minutes due to some systemd nonsense, mysteriously fixed
with the new mobo and CPU. Definitely a plus to have that gone now.

Tested with Ubuntu and also Windows, Keras, and games.

------
xpuente
Perhaps we need data from "real" benchmarks, such as SPEC CPU 2017. Geekbench
is merely a toy compared with SPEC CPU.

------
CoolGuySteve
Hopefully AMD will release details on the next ThreadRipper iteration before
the 3950X release in September.

If the top-end ThreadRipper is just two of these glued together, then there's
not really any other choice for a workstation build.

But if the next ThreadRipper goes further and has 4 core complexes on a
single MMU, then it will have insane performance.

~~~
undersuit
I think it all depends on whether ThreadRipper gets a new socket. Matisse is
2-channel DDR, Threadripper is currently 4-channel, and Rome, the new server
platform, is supposed to be 8-channel.

Using the Rome platform unmodified would mean 8-channel matched DDR4 kits for
consumers. Re-using Matisse silicon glued together makes the next-gen
ThreadRipper a NUMA device. Maybe AMD has an I/O die just for the low-volume
ThreadRipper in production. Hopefully, AMD will just harvest Rome I/O dies
for ThreadRipper and tease us with 32-, 48-, and 64-core models that work on
the existing TR4 socket.

------
klodolph
Traditionally AMD does NUMA differently from Intel. I'd like to see a
comparison focusing on NUMA.

~~~
p_l
Ryzen-branded CPUs have a single NUMA zone, Threadripper has 2, and EPYC has
2 per socket.

~~~
paol
This was true up until now, but the upcoming 3900X and 3950X have 2 separate
dies ("chiplets"). I assume this means they will have the same architecture
as the current TR 2920X and 2950X.

Edit: opencl's post above says apparently not; it's a different memory
architecture.

~~~
horyzen
The 2 smaller chiplets are the core complexes; the memory controller is on
the I/O die (the other, bigger one) that the 2 chiplets share. So there is
still a single memory controller, and hence no NUMA.

------
dosshell
And it looks like it has room for overclocking: here, 5 GHz and 65499 points
in Geekbench 4 (the record for the Core i9-7960X is 60991 points).

[https://m.youtube.com/watch?v=1woeitoCXsQ](https://m.youtube.com/watch?v=1woeitoCXsQ)

------
andy_ppp
Does anyone know how these CPUs compare with Spectre/Meltdown protections
enabled?

~~~
eigenspace
AMD processors have so far been immune to all but one attack and in the
previous gen processor the mitigation caused a 2% performance hit. This gen
has specific hardware for mitigating these side channel effects so the numbers
you're seeing are with the mitigations.

~~~
uponcoffee
I think the parent comment is asking whether the patches have been applied to
both chips in the benchmark. Yes, AMD's performance will be largely the same,
but the comparison between patched and unpatched will be different.

~~~
tracker1
Not sure about this benchmark, but the comparisons in AMD's presentations
have been against unpatched Intel chips.

------
Aaronstotle
I want one, for no real reason other than increasing my SETI@home score;
however, I'd probably do a lot more video encoding with that much horsepower.

------
postit
Wondering if it makes Bazel builds faster?

~~~
solomatov
I used Bazel on a Threadripper and it's much faster there than on a standard
lower-core-count machine.

~~~
postit
I might consider upgrading my workstation. My quad-core i7-4770 @ 3.40 GHz
isn't up to the task.

[https://i.imgur.com/Iz0SMvR.png](https://i.imgur.com/Iz0SMvR.png)

~~~
solomatov
After this post, I am thinking about replacing my TR with this beast :-)

~~~
tracker1
If you're already on a 16c Threadripper, it may be worth it... presuming the
shared IO/Memory controller carries over, it'll have much better throughput on
memory constrained workloads that bottleneck with NUMA.

------
masklinn
> So, as AMD’s new 16-core Zen 2 flagship has now been officially launched

Does that mean there's no 16c Threadripper planned?

~~~
vbezhenar
Why? Threadripper is for workstations, Ryzen for consumers. Core count
doesn't have anything to do with that. There was an 8-core Ryzen alongside an
8-core Threadripper in the past.

~~~
masklinn
> Why?

Because the bit I quoted calls the 3950X the Zen 2 flagship.

> Threadripper is for workstations

Threadripper is HEDT, it's not a strictly workstation part (unlike Xeon W).

~~~
vbezhenar
They promised new Threadrippers. I guess it's the flagship until the new
Threadrippers are revealed.

~~~
masklinn
Ok, thanks for the info.

------
Shorel
More interesting is how close they are in single core performance.

This is a very important battle to watch.

~~~
wyred
I'm out of the loop. Why is single core performance important, since no one (I
think) buys a multi-core CPU just to limit themselves to using only 1 core?

~~~
Shorel
What makes you think anyone limits their computer to run only one core? I'm
flabbergasted.

Many people choose the CPU that has the highest single core performance, for
both gaming or real-time multimedia processing.

Games are usually optimized for a single core, or a low number of cores, not
to use all 4 or 6 or 8 cores of a system. Therefore, for gaming single core
performance is still very important.

You can read opinions from people still choosing Intel because of single-core
performance in this very discussion.

Additionally, modern CPUs move running threads among cores to prevent any
single core from overheating. It's a thermal management strategy.
~~~
wyred
Like I said, I'm out of the loop. Thanks for the explanation!

------
locusm
Who's your fave YouTube reviewer for CPU releases?

------
IloveHN84
Wow

------
fxprq
Single-core performance > Multi-core performance for most workloads. Maybe in
10 years it will be the opposite.

~~~
mfatica
That's not true.

~~~
fxprq
I wish it weren't, but I'm tired of seeing many processes locked at 12% CPU,
i.e. one core. Granted, I mainly use old software. If you recode video or
play modern games, I assume it's better. But for now I'll keep judging the
worth of a CPU by its single-core performance.

~~~
unethical_ban
You just said something general, then replied about your own case.

There are millions of PC gamers, video encoders, and multi-workload users. If
you're running an intense, non-multithreaded workload and need the absolute
best without caring about security (Spectre/Meltdown) or cost, go Intel.

~~~
hartator
I'll agree that even for video games, single-core performance matters a lot.
Most workloads that can use multiple cores are on the GPU, whereas the main
event loop will be single-core constrained. Maybe AI is an exception, but
again, AI should be on the GPU or a dedicated chip.

------
parentheses
Side-channel vulnerabilities routinely attack performance optimizations that
reveal information about data and/or code.

How much of the difference can be attributed to performance lost to recently
patched side-channel vulnerabilities?

