
AMD unveils world's most powerful desktop CPUs - kmod
https://www.zdnet.com/article/amd-unveils-worlds-most-powerful-desktop-cpus/
======
kogepathic
> AMD has released a salvo by unveiled what are the world's most powerful
> desktop processors

Is it too much to ask that tech journalists proofread their articles before
hitting publish?

> AMD unveils world's most powerful desktop CPUs

This article contains zero benchmarks, or anything beyond core count and TDP,
to substantiate the claim that these are the "world's most powerful desktop
CPUs".

Can we please stop giving ZDNet impressions for this tripe?

The Anandtech article from 4 days ago on the Threadripper announcement at
least included AMD's slide deck. [1]

[1] [https://www.anandtech.com/show/15062/amds-2019-fall-update](https://www.anandtech.com/show/15062/amds-2019-fall-update)

~~~
mkl
Discussion of the Anandtech article:
[https://news.ycombinator.com/item?id=21473235](https://news.ycombinator.com/item?id=21473235)

------
cromwellian
It's amazing how AMD, which once seemed to be a company on its knees, now seems
to be kicking Intel's butt, and I find myself worrying about Intel instead. Why
do they seem to be failing on so many projects: Itanium, Larrabee, XScale, the
modem division? Why is one of the world's largest and most successful CPU
companies now noncompetitive on pretty much everything except their mainstay
x86 processors, where they seem to be stalling?

Is there something wrong with Intel management/culture that's allowing other
firms to run rings around them?

~~~
lm28469
> which once seemed a company on their knees

Weren't AMD CPUs a good alternative to Intel for most consumers? If I remember
correctly, Athlon, Phenom, and FX were pretty good for the price. Or are you
talking about earlier times?

edit: "good alternative" perf/price-wise; yes, Intel used to have the advantage
in raw performance, but their pricing made AMD CPUs look very interesting for
most consumers.

~~~
pmjordan
Phenom got soundly beaten by the first few Core i generations (Nehalem, Sandy
Bridge) in terms of both peak performance and power efficiency. Bulldozer (the
basis for AMD's FX and "A" series CPUs) was mostly a misstep, doomed to feed
the very low end of the market. The only truly successful devices from that
era of AMD silicon were the Xbox One and PS4. (Incidentally, those consoles
used "Jaguar" cores, which descended from AMD's low-power Bobcat line rather
than from Bulldozer, and so didn't inherit the shared execution units (CMT)
that were Bulldozer's headline feature and turned out not to work as well as
intended.)

So between about 2010 and the introduction of Ryzen in 2017, there really was
no alternative to Intel if you were building a PC that was either fast or
energy efficient. In terms of performance, the 1000 and 2000 series Ryzens
roughly attained parity with Intel, while the 3000 series now pulls ahead in
many multi-threaded and some single-threaded workloads, and definitely has
better performance-per-watt under load.

Ryzen still draws significantly more power than Intel's CPUs when idle,
though: the PCIe 4.0 transition has counteracted the improvements from the 7nm
shrink in this regard. And because the APU variants lag the GPU-less SKUs by
almost a year in process and microarchitecture, you also have to budget for a
discrete GPU, which adds a few idle watts.
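
(For the curious: a quick way to eyeball package power on Linux is the
powercap (RAPL) energy counter. A minimal sketch, assuming the standard
intel-rapl zone, which newer kernels also expose for AMD parts; the zone name
may differ on your system:)

    import time
    from pathlib import Path

    # Package energy counter in microjoules; sample it twice and divide
    # by the interval to get average watts. Counter wraparound is ignored
    # here for simplicity.
    zone = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

    interval_s = 10  # sample over an idle period
    e0 = int(zone.read_text())
    time.sleep(interval_s)
    e1 = int(zone.read_text())

    print(f"average package power: {(e1 - e0) / interval_s / 1e6:.2f} W")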

(The lowest-idle-power Ryzen build I'm aware of is the ASRock DeskMini A300,
which uses _no_ chipset, combined with a low-end APU; this draws about 7W when
idle. In terms of performance and expandability, that system isn't any better
than a NUC, which draws somewhat less power when idle. For some reason you
can't buy a full-size A300/X300 motherboard, even though that would actually
be quite an interesting setup.)

~~~
1996
About the ASRock mini: have the BIOS issues been solved?

[https://www.newegg.com/Product/SingleProductReview?item=N82E...](https://www.newegg.com/Product/SingleProductReview?item=N82E16856158064&parentCount=1&TabType=1)

~~~
pmjordan
That link doesn't work for me (perhaps geo-IP gated?) - what issue are we
talking about?

The highly reputable c't Magazine used it in one of their recommended PC
builds recently. They tend to be very thorough and fussy about issues, so I
suspect they didn't run into whatever you're referring to. I believe they did
mention that you need to watch out for BIOS support for the 3000 series CPUs
(and ask the seller to update it if necessary and you don't have an older AM4
APU handy), as with any motherboard that predates the release of the CPU
you're trying to use.

------
tomtomtom777
Please promote the guy who came up with the name "Threadripper".

~~~
jiggawatts
Sounds like something a true-blue Aussie bloke would come up with.

~~~
HNLurker2
I only give negative feedback

------
jotm
280 watts TDP! Wow, I think that's a new record for desktop CPUs, although
GPUs reached that a while ago. I wonder how well air coolers work on them, or
do they need water cooling for maximum performance? Impressive nonetheless.

~~~
londons_explore
I think TDP is becoming less and less comparable across designs. Any design
can draw far more power in specific worst-case corner cases (e.g. an
instruction stream crafted so that every functional unit is in use every
cycle and all data paths toggle logic state every cycle).

Yet in the typical use case (a desktop PC with most cores idle most of the
time), power consumption will be far lower.

What really matters is the graph of "how much computation can I get out of
this thing in 10ms bursts, 1 second bursts, and 10 minute bursts?", with a
specific cooling setup.
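
(Sketching what that measurement might look like, with a toy single-threaded
workload; a real benchmark would pin threads, run one worker per core, and
control the cooling setup:)

    import time

    def work_chunk(n=10_000):
        # A small, fixed unit of CPU-bound work (toy workload).
        return sum(i * i for i in range(n))

    def burst_throughput(duration_s):
        # Count how many work chunks complete within one burst window.
        chunks = 0
        deadline = time.perf_counter() + duration_s
        while time.perf_counter() < deadline:
            work_chunk()
            chunks += 1
        return chunks / duration_s

    # Short bursts mostly measure boost behaviour; long bursts measure
    # sustained, thermally limited performance under a given cooler.
    for duration in (0.01, 1.0, 600.0):
        print(f"{duration:>7.2f}s burst: {burst_throughput(duration):.0f} chunks/s")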

~~~
pizza234
It's correct that TDP is getting more relative¹, but it's not correct that
we're only talking about corner cases: Anandtech locked an i9-9900K to 95W,
and the performance loss was considerable (between 8 and 28%)².

¹: It should be noted that in recent years Intel's TDP has been getting "much
more relative" than AMD's.

²: [https://www.anandtech.com/show/13591/the-intel-core-i9-9900k...](https://www.anandtech.com/show/13591/the-intel-core-i9-9900k-at-95w-fixing-the-power-for-sff/9)

~~~
pingyong
Intel's TDP has always had the same meaning: "If your cooler can dissipate
TDP, the CPU will run all-core loads at base clock speed, or single-core loads
at boost clock speed." Pretty sure that statement has always been true, within
+/- 100 MHz depending on the specific workload. The distance between base and
boost clocks has just increased over the years. (For example, an i9-9900K with
a 95W-capable cooler is guaranteed its 3.6 GHz all-core base clock, even
though single cores can boost to 5 GHz.)

------
Beltiras
It's finally happened: I don't foresee a time when I'll need more CPU power in
my desktop. I could use more memory, faster (and larger) SSDs, faster GPUs
with more TCs (although I'm actually mostly good there too), and more bulk
data storage on platter disks, but honestly I'm set for CPU for all my needs.

Here's why: most of the cores on my Ryzen 7 are idle most of the time. There
are intense periods when I need to compile stuff and they max out, but these
are relatively short. Cup-of-coffee short. Any gaming activity seems to use no
more than 8 cores max. Should I ever need more CPU, I don't need an order of
magnitude more, I need orders of magnitude more, and that is easily procured
through AWS, Azure, or some other cloud provider. Same with GPUs: I might make
a training run for a NN on my desktop to hypothesis-test something that I
would then run for longer hours on cloud hardware. It's never going to be
cost-efficient for me to scale up with any sort of ownership of the hardware.
Of course I'm going to buy the new monster at some point, but it will be due
to hardware failure, not due to me needing the new shiny.

~~~
ebg13
> _Any gaming activity seems to use no more than 8 cores max_

Large numbers of cores have never been particularly useful for gaming, but
overall per-thread performance has. Right now the major bottleneck for
high-end gaming is the GPU, because display pixel counts have gone through the
roof with high-frame-rate VR and 5K monitors. But keep in mind that the
current most powerful graphics card for games (the Nvidia 2080 Ti) is over a
year old, still costs over $1,000, and is not about to be replaced, because,
just like Intel, Nvidia hasn't had competition at the top in a long time. If
AMD can bring this success to the graphics space, maybe that will change too.

~~~
pmjordan
AMD's Navi is looking to be competing quite well with NVIDIA in the
mid-range/enthusiast space, and shortly in the low-to-mid-range segment with
the imminent RX 5500.

I suspect it won't be long until the empire strikes back here, though: NVIDIA
also uses TSMC for fabbing its GPUs, so they too will soon be shipping 7nm
GPUs, with all the intrinsic advantages of that node. Of course, _pricing_ of
those new products will hopefully be strongly influenced by AMD's resurgence.
(And, supposedly, by Intel's re-entry into the GPU market, but I remain
skeptical that will make much of a dent for a few years yet.)

------
uuioperter
I've been reading that many of the Ryzen and Threadripper CPUs support ECC
RAM, but motherboard support is shaky at best. Worse, some motherboards will
boot with ECC memory but don't actually use it.

Does anyone have any concrete experience?

I was hoping to build a workstation with ECC memory, but it appears only the
EPYC CPUs have certified support for it.

The Xeon W appears to be the most cost-effective route to guaranteed ECC
memory.

~~~
kilo_bravo_3
I have an ASRock X399 Professional and a TR-1950X.

64GB DDR4-2666 ECC UDIMM runs perfectly on it, and the kernel reports ECC
support.
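
(If you want to verify that yourself, the kernel's EDAC driver exposes
per-controller error counters in sysfs. A minimal sketch, assuming the
standard EDAC layout; no controllers showing up usually means ECC isn't
active:)

    from pathlib import Path

    # If the Linux EDAC driver has bound to the memory controller, ECC is
    # active and error counters appear under sysfs.
    edac_root = Path("/sys/devices/system/edac/mc")

    controllers = sorted(edac_root.glob("mc*")) if edac_root.exists() else []
    if not controllers:
        print("No EDAC memory controller found; ECC is likely inactive.")
    for mc in controllers:
        ce = (mc / "ce_count").read_text().strip()  # corrected errors
        ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")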

Day-one memory support was pathetically bad with Threadripper, but it has
gotten better, and improved even more with Threadripper 2000.

If Ryzen 3000 is any guide, the gaping memory-compatibility gulf between Intel
and AMD will probably have narrowed to a slight gap by the time TR 3000
releases.

As far as I'm aware, every X399 motherboard vendor supports and warranties ECC
support on the platform, but only if you use memory on their QVL at its rated
speeds.

sTRX4 will likely be no different.

As for cost-effectiveness: a W-3175X ($2,978.80) plus a Dominus Extreme
($1,868.74), $4,847.54 in total, is indeed $552.45 less expensive than a
Supermicro MBD-H11SSL-NC with an EPYC 7702P ($5,399.99), but the 7702P is
about 37% faster multi-threaded and about 13% faster single-threaded.

------
loser777
It seems they have left the xx90X model number unused for now, but the 32-core
has been accounted for...

~~~
alecmg
Not a huge secret that they have a 64-core Threadripper 3990X ready to go.

But there's also no hurry to release it, as even the 24-core wipes the floor
with anything Intel has to offer.

~~~
LoSboccacc
> wipes the floor with anything Intel has to offer

* in synthetic benchmarks that use every last core

edit: my bad, I mistook this for a technical audience

~~~
pdimitar
Single-core CPU performance has plateaued; there aren't any revolutionary
increases expected there anytime soon.

Multicore performance gets more and more important with time.

~~~
LoSboccacc
> Multicore performance gets more and more important with time.

No? Above 2-4 cores it's mostly marketing hype, just like it was ten years
ago, and today if you need parallel processing you have vastly more powerful
resources to tap.

Sure, this AMD CPU might be faster at 3D rendering, video encoding, or other
tasks that use 4+ cores, and that might be useful if you truly can't move them
onto a GPU. But the truth is that at the top end, many-core architectures are
getting squeezed out by streaming processors, and workloads with a hard
dependency on a many-core, single-node, general-purpose processor are rarer by
the day.

~~~
pdimitar
Threadripper has proven to do extremely well in parallel compilation
benchmarks, the Linux kernel build being the chief example. I've seen the
numbers on at least 4 sites (Phoronix included).

It's not only about 3D. We're talking everyday programmer productivity,
partially achieved by less waiting around when compiling your projects.
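
(The mechanics are simple: a build fans independent translation units out
across cores, roughly like this sketch. The src/ layout and compiler flags
here are hypothetical; `make -jN` does the same thing for the kernel:)

    import subprocess
    import time
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    # Hypothetical project layout: every .c file under src/ is an
    # independent translation unit, so compiles can run in parallel.
    sources = list(Path("src").glob("*.c"))

    def compile_one(src):
        # Each compile is a separate child process; the threads just wait
        # on them, so this scales with cores despite Python's GIL.
        obj = src.with_suffix(".o")
        subprocess.run(["cc", "-O2", "-c", str(src), "-o", str(obj)], check=True)

    for workers in (1, 4, 16, 64):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(compile_one, sources))
        print(f"{workers:>3} workers: {time.perf_counter() - start:.1f}s")

(Wall-clock time keeps dropping as workers increase, until you run out of
independent files, or cores.)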

~~~
LoSboccacc
> everyday programmer productivity

> the Linux kernel build process

~~~
joaobeno
Yeah, what is this BS "Linux kernel build process" that's supposedly a known
metric everyone can test against? I compile projects dozens of times a day at
work and not once have I compiled the Linux kernel...

~~~
pdimitar
It's an extreme case that demonstrates whether a CPU's multicore performance
is clearly better than the previous generation's.

For what it's worth, I migrated from a consumer-grade i7 CPU to a
workstation-grade Xeon, and the differences in compile speeds in my everyday
projects (fewer than 400 files) are still significant and very clearly in
favour of the Xeon.

Linux kernel compilation is just a benchmark that magnifies such differences
between CPUs, so that buyers can, hopefully, make a more informed decision.

~~~
LoSboccacc
Weird; I have a 9,000+ file Maven monster, and IntelliJ's incremental build
means I never have to wait for compilation during normal operations, and this
is with a meager i7-8550U.

~~~
pdimitar
Well, that's an incremental build. I am talking about occasionally changing
dependencies in a project written in a dynamic-but-compiled language, which
leads to full rebuilds. There the difference was big.

It also has tooling for "watching" test directories and automatically
re-running tests when certain files change, which uses incremental compilation
under the hood, and there is indeed no visible compilation performance
difference.

