
AMD Q4: 16-core Ryzen 9 3950X, Threadripper Up To 32-Core 3970X - pella
https://www.anandtech.com/print/15062/amds-2019-fall-update
======
maerek
This is fantastic. I'm most interested in the eventual arrival of Zen 2-based
APUs for upgrading my tiny home server. Getting a (hopefully) 8 core, 16
thread part with integrated graphics at the end of 2020 would be a fantastic
value if prices stay similar to the current announcement (sub $200). Seeing
AMD continue to make incredible progress on their chips makes me very excited!

~~~
wil421
What are your use cases for the tiny home server and the APU? I built a
smallish FreeNAS box with an i3 earlier this year, before this year's AMD
announcements. I like the i3 because it supports ECC memory and fits into some
Supermicro server boards. IPMI makes it easy to set up over LAN, and I never
have to plug in a monitor or keyboard. It would be nice to see more AMD
boards with it besides the ASRock X470D4U.

My desktop/Plex server is due for an upgrade next year. Maybe Threadripper
prices will come down.

~~~
derefr
> IPMI makes it easy to setup over LAN and I don’t have to ever plug in a
> monitor or keyboard.

Hardware BMCs have their place (e.g. low-overhead compute-cluster nodes, where
free cores = profit.)

But, for _most_ workloads—and especially _consumer_ workloads—there’s no
reason that the concept of a “Baseboard Management Controller” needs to be
instantiated as hardware; you can just as well set the system up with a
hypervisor OS (e.g. a minimal Linux KVM install; or an appliance-OS designed
for this, like VMWare’s ESXi), set your regular workload up as one VM guest
(and pass through to it all the nice hardware you have, like GPUs), and then
set up another “control plane” guest VM that exposes IPMI management of your
regular guest and of the hypervisor itself. As they say, “there’s no problem
that can’t be solved with another layer of indirection.” ;)

(I should note, this is exactly the setup you get _by default_ if you install
ESXi [hypervisor] + a free home license of vCenter Server [the BMC-equivalent
appliance] onto a box. I was happily using this exact setup for quite a
while, though I eventually moved to Linux+KVM+Xen just because I wanted the
host to be able to create guest volumes from a thin-provisioned storage pool
and then serve them out to the guests over iSCSI, as if I had a teeny-tiny
SAN.)
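
To make the "control plane guest" idea concrete, here's a minimal sketch of
what that management VM might run on the Linux/KVM side, using libvirt's
Python bindings (pip install libvirt-python). The host URI and guest name are
hypothetical placeholders, not anything from the setup described above:

    import libvirt  # Python bindings for the libvirt virtualization API

    # Connect from the control-plane guest to the hypervisor host.
    conn = libvirt.open("qemu+ssh://root@hypervisor-host/system")
    dom = conn.lookupByName("workload-vm")  # the "regular workload" guest

    # Rough equivalents of IPMI "chassis power on" / "chassis status":
    if not dom.isActive():
        dom.create()  # boot the guest
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"state={state} vcpus={vcpus} mem={mem // 1024} MiB")

    # dom.shutdown() would be the graceful ACPI shutdown;
    # dom.destroy() the hard power-off.
    conn.close()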

Of course, this has only become a viable approach for IoT integrators very
recently, which is why we don't see any IoT appliances (e.g. NASes) shipping
set up this way from the manufacturer just yet. Until recently, your choices
for building IoT devices were microcontrollers at the low end, old ARM cores
in the middle, and Intel's most "power-efficient", feature-stripped cores at
the high end. None of these were particularly suited to hosting
virtualization. But Ryzen is! While it may only be affordable to home-builders
today, I expect to see AMD chasing Intel into its "power-efficient embedded"
market segment quite soon, with Ryzen-based, high-core-count,
virtualization-capable equivalents to the Intel Atom line sold cheaply enough
to get system integrators excited.

~~~
wil421
Running FreeNAS in a VM isn't recommended, and I've heard about problems with
iSCSI :-). I could easily pick up used Dell servers with dual E5 Xeons, 128GB
of ECC RAM, and whatever SATA/SAS controllers I want off Craigslist. ESXi
costs money (a yearly cost at that), but I have played around with the trial
version.

But! The FreeNAS community is a bunch of grumpy sysadmins. I'm considering
going down the Linux-and-ZFS route. I'd be able to do more with VMs (I feel
more comfortable in Linux vs. FreeBSD). I'm also building some IoT Pis to
collect data, and having a Linux box for that would be nice.

The UniFi USG handles dynamic DNS and my VPN.

~~~
technofiend
Raspberry Pis haven't _quite_ got there yet, but I'm hoping the next iteration
will have an NVMe or SATA implementation. Although, to be honest, it doesn't
have to be the Pi. Any small board that runs Linux, has at least gigabit
Ethernet, and has a fast path to disk will do. At that point it'll be possible
to make a Ceph cluster with one Pi per disk.

~~~
vetinari
For some time, I've been thinking of making a Ceph cluster out of ODROID-HC1s
or HC2s (https://www.hardkernel.com/shop/odroid-hc2-home-cloud-two).

~~~
technofiend
A few years ago now, Western Digital demonstrated an onboard controller with
two 1Gb NICs and a mini Linux distribution with a single Ceph OSD installed.
Unfortunately it never made it out of the lab. I would gladly pay a
$50-per-device premium for spinning rust with that onboard. Perhaps the issue
is that an NVMe-connected version would be a much costlier device to build?
Or maybe there's no standard for housing network-connected storage devices in
a rack?

------
NicoJuicy
AMD appears in a trending section almost every other day.

How can OEMs still ignore AMD? It's obviously very popular. They have the
best offering, and no one can match their prices.

How long until Intel's monopoly falls, given consumer demand and AMD's
seemingly (almost) perfect long-term execution:

- 2016: 8.1% market share

- 2019: 18% market share

According to Intel, they have free rein until 2021. My guess is that they
will have free rein long after 2023 (when TSMC's 3nm process is released).

I think the current market share is mostly thanks to DIY builders. Am I wrong?
The only serious AMD OEM offering I have seen was in Microsoft's Surface line.

Side note: AMD is mostly interesting for desktops for now (laptop battery
life is the sticking point), until next quarter.

But how come OEM desktops lack an AMD alternative? Any articles/information on
OEMs' partnerships with Intel?

PS: Please upvote a more on-topic comment concerning the product itself. I
didn't want this to be the top comment.

~~~
psnosignaluk
It's been an interesting year for AMD in the DIY builder space. The everyman
chip that seems to get recommended at every turn is the 3600X (or the 3600 if
you're partial to Gamers Nexus). Most big-ticket builds seem to be opting
for a 3900X these days. That, however, is a fickle market, and subject to the
kind of fandom where lines are drawn on brand loyalty that no amount of
contrary evidence will shift. As an enthusiast market, let 'em have at it. The
saltiness in comment threads can be comedy gold. The day that Intel launches a
12-core+ CPU on 10nm, it'll be the belle of the ball for all the next-gen
big-ticket builds. Swings and roundabouts.

AMD are starting to make some waves in the hyperscale DC with EPYC Rome. The
real fun starts if that progress translates into your corporate workhorses
starting to opt for EPYC over Xeon in their data centres. Intel are a big
company to take down a peg, and have had little motivation to innovate any
more than needed. AMD also have a bit of a history of doing amazing things and
then dropping the ball in spectacular fashion, which is a candle that no
corporate buyer wants to be holding when they've pumped a ton of capex into a
5-year deal pinned on fleet maintenance. Intel might not be dynamic, but at
least you know what you're buying into. I think the same translates to
those making the purchasing decisions for corporate users. For enthusiasts,
what your machine runs on is a big deal. For everyone else, as long as it
switches on and lets you get through a working day without going tits up, who
cares? And if it does go wrong, it needs to be fixed or replaced before you
start falling behind on work. That all trends toward buying patterns that
focus on known-good configs.

It's a fun market to watch. If I were back in my old role, I'd be looking at
EPYC solutions for my servers and attempting to wrangle a few test laptops
with Ryzen 4000 CPUs next year, even if only to worry my boss.

~~~
NicoJuicy
> The day that Intel launches a 12-core+ CPU on 10nm, it'll be the belle of
> the ball for all the next-gen big-ticket builds

But it's based on Skylake. I'm not convinced the game will shift sides that
fast.

I only mentioned 3nm because it seems like a big difference. But I think the
architecture is more important, and that seems to be a home game for AMD
right now.

------
andy_ppp
I would bite AMD's hand off for a Mac mini equivalent with a reasonable
graphics card and AMD's latest chips in it. Everything in JavaScript/Docker
land on projects of scale takes an absolute age when running all the tests on
a laptop. Fans are constantly on, and the CPU throttles. It's crazy that JS
dev is this hardware-inefficient...

~~~
ianai
Uh, maybe use better webdev stacks?

I know I'm arguing against 20+ years of software practice, but Moore's law is
over.

~~~
andy_ppp
I'm tempted to be very, very sarcastic here, but instead I'll just say that
clearly I'm not able to move a whole 3,000-person organisation onto a better
stack, because convincing people that they would get no features for 3-6
months would be impossible.

~~~
FpUser
I can't vouch for every situation, but I think in general it's pretty much
achievable through slow attrition. From what I've observed while consulting,
corporate systems are collections of hundreds of components. Start with one
that has clearly defined functionality and slowly chip away. That doesn't
prevent you from delivering new features.

The only caveat is that it introduces a certain overhead and has to be
carefully managed. You also have to make sure it has a measurable ROI.

~~~
andy_ppp
Okay, fine. I will try Elixir, Phoenix LiveView, and as little JS as
possible, and see how we go.

~~~
FpUser
I'm not sure Elixir is a performance beast. But then again, I don't know
much about it other than that it runs on a VM.

~~~
andy_ppp
Everyone thinks Elixir is slow until they use it and then realise throughput
is often more important than absolute performance.

~~~
FpUser
As I said, I don't know much about Elixir, so you be the judge. I do, however,
have doubts about the throughput of well-designed code running on a VM
beating that of native code. My understanding is that on the Erlang VM this
throughput is achieved with async, message-passing patterns backed by
internal thread pools. All of that is perfectly available to "low-level"
languages as well.
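
To make that concrete, here's a toy sketch of the mailbox-plus-worker-pool
pattern in Python; purely illustrative, with none of the BEAM's preemptive
scheduling or per-process heaps:

    import queue
    import threading

    mailbox = queue.Queue()  # the shared "mailbox" messages are posted to

    def actor():
        # Each worker loops, pulling one message at a time off the mailbox.
        while True:
            msg = mailbox.get()
            if msg is None:  # poison pill: tells this worker to exit
                break
            print(f"{threading.current_thread().name} handled {msg!r}")
            mailbox.task_done()

    pool = [threading.Thread(target=actor, name=f"actor-{i}") for i in range(4)]
    for t in pool:
        t.start()

    # Throughput comes from keeping many small messages in flight,
    # not from making any single message fast.
    for i in range(20):
        mailbox.put(f"request-{i}")
    mailbox.join()  # wait until every request has been handled

    for _ in pool:
        mailbox.put(None)  # one poison pill per worker
    for t in pool:
        t.join()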

~~~
andy_ppp
I understand. My preference is Elixir, but while it's fast enough and
scalable, it is a memory and CPU hog.

------
nickjj
The first computer I ever built had a 133MHz Cyrix CPU, and over time I tried
Celeron and Pentium CPUs, up until the current day, which is a 3.2GHz i5 from
~6 years ago.

I still don't feel the need to upgrade, but when I do, this might be the
first time I really consider an AMD CPU. Things are looking really, really
solid for them lately. I could totally see using one for an all-purpose
development, video editing/recording, and gaming box to replace this i5
eventually.

~~~
shrewduser
If you do lots of compiling and run the typical suite of things a developer
might (IDE, terminal apps, compilers, your backend, emulators, web browser),
these kinds of CPUs can be a large benefit.

For the average user, not so much.

~~~
m0zg
>> for the average user, not so much

Actually, there are workloads where they will heavily benefit the average
user as well. Image processing and video editing come to mind especially. You
may argue that few users do this on a PC nowadays, but that's mostly an
oversight by OS developers. MS should revive Movie Maker; I used it a lot 10
years ago when my kid was little. Apple already ships Photos and iMovie with
every new Mac, and both are pretty great for what they were designed for.
Then there's also more and more 4K content on YouTube by the day. My
5-year-old iMac spins up its fans quite a lot nowadays.

I think it's also a good time to start moving some AI workloads to the edge.
It's ridiculous that we have near-instantaneous on-device speech recognition
on phones now, but PCs still have to dial home and incur perceptible latency.
I want local speech recognition out of the box in Windows and macOS (and
ideally Linux as well), with automatic punctuation, robust to background
noise.

~~~
anonymfus
Microsoft's video editor today is the Photos app. But it's a very
unconventional one: it has motion tracking but no layers.

~~~
m0zg
It also sucks pretty badly as a photo management app, especially when compared
to Apple Photos.

------
b1gtuna
I wonder how fast I could compile my codebase with this. On my hexa-core
i7-8850H, it often takes more than 4 hours to build everything at full
throttle. And I do this quite often, so the pain is definitely present. Given
that network and disk I/O aren't the bottleneck, shouldn't having more than 5
times the cores theoretically reduce the build time by at least 3x,
conservatively?

~~~
opencl
Phoronix does compilation benchmarks (for the Linux kernel and LLVM), and the
existing Ryzen chips perform quite well on them. The i5-8400 is probably the
closest thing on the chart to your 8850H.

But there are diminishing returns to adding more cores past a certain point,
which will depend on your codebase and compiler. If your builds sit at 100%
CPU utilization most of the time, you will probably see pretty large gains,
but sometimes a significant chunk of the time ends up bottlenecked by
single-threaded performance.

[https://www.phoronix.com/scan.php?page=article&item=ryzen-37...](https://www.phoronix.com/scan.php?page=article&item=ryzen-3700x-3900x-linux&num=7)
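
For a back-of-the-envelope feel for those diminishing returns, Amdahl's law
gives the ceiling. A minimal sketch in Python, where the serial fractions are
made-up illustrations rather than measurements of any real build:

    # Amdahl's law: max speedup on N cores when a fraction `s` of the
    # work (linking, one huge translation unit, etc.) stays serial.
    def speedup(cores: int, s: float) -> float:
        return 1.0 / (s + (1.0 - s) / cores)

    for s in (0.05, 0.10, 0.20):
        gain = speedup(32, s) / speedup(6, s)  # 32-core TR vs. 6-core 8850H
        print(f"serial fraction {s:.0%}: ~{gain:.1f}x faster overall")
    # Prints roughly 2.6x, 2.0x, and 1.5x: the hoped-for "at least 3x"
    # only materializes if the serial fraction is very small.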

~~~
urmish
Does Linux kernel compilation take only a few minutes? The chart there says a
Ryzen 3 2200G takes 242 seconds to compile the whole kernel. I find that
difficult to believe.

~~~
opencl
Phoronix tests compiling the upstream default config, which is pretty
barebones. A normal kernel build for a desktop machine will take much longer
because many more modules are enabled.

------
Damogran6
Modern processor design is mind-boggling in its capacity. A coworker has a
modern Alienware... it was able to host a VR session, play a 1440p game, and
transcode video, all at once.

~~~
MattyMc
Goodness. That really is incredible.

------
c0ffe
I would like to know if anybody has good experiences to share about AMD video
cards on Linux (with the latest AMDGPU driver).

I bought a Ryzen (Zen 2) for a workstation where I need to run a few VMs, a
local k8s cluster, builds, some browser tabs, and Slack. I have everything
running smoothly on top of a Linux 5 kernel, and so far I'm pleased with the
results.

But I kept an older NVIDIA card, and its drivers have always had a bit of
trouble with desktop Linux support (like Wayland, the Plymouth bootsplash,
etc.).

~~~
frio
I've got an RX 580; it works almost flawlessly, including for games run via
Proton etc. The only significant problem I've had was when I (unwittingly)
received a new Mesa installation (I'm on NixOS/unstable, so, rolling release)
and everything I'd already played stopped working. It took me a while to
figure out that I had to delete the shaders that had been cached for the
older version of Mesa. I imagine most non-rolling distros wouldn't have that
problem.

~~~
account42
That shouldn't happen on rolling-release distros either. It sounds like a
NixOS packaging issue, where the version (or git commit hash) of Mesa/LLVM is
missing from the cache key, or the Nix package applied patches without
changing the version in the cache key.

------
a012
Meanwhile, I'm still waiting for an AMD-based NUC-sized PC with Zen 2 cores
and a more powerful Radeon iGPU.

~~~
tecleandor
It's not a NUC, but the ASRock DeskMini A300 is quite small and might cover
some of your needs...

~~~
michielr
The A300 is wonderful, but there aren't any Zen 2 APUs available yet. When
the next-gen APUs with Zen 2 cores come out, I'm hoping to build an A300 with
a Ryzen 5 APU and 32GB of RAM. Seems like a dream dev machine to me.

------
bitL
Abandoning X399 is silly; AMD has effectively cut off entry-level HEDT folks
(24 cores is the minimum now, while 8-12 cores is plenty for e.g. deep
learning researchers) and upset previous enthusiasts who invested a lot of
money on the promise of a future upgrade (boards in the $350-$650 range).
Given that EPYC Rome up to 32 cores can run in 1st-gen EPYC boards with just
a BIOS update, it's difficult to understand the decision (unless they cut
corners on TR1/TR2 boards). I don't see any reason not to release a
backwards-compatible TR3, even if they had to limit frequency/TDP on older
boards...

I had planned to buy a 64-core TR3, but now I'll be skipping this and the
next gen and buying a TR5 with DDR5 in 2021 instead.

~~~
kllrnohj
They changed the interface between the CPU and the chipset; that appears to
be the main breakage. Specifically, they doubled its width, not just bumped
it from PCIe 3.0 to 4.0: instead of an x4 link it's now an x8 link.

It does suck, but since they also seem to have dropped the lower-cost SKUs,
it's probably not a motherboard upgrade that's going to stop you from
dropping $1,400 on a CPU.

And if you really were considering a 64-core one, it's hard to believe a
price difference of ~$400 will matter on what's likely to be a ~$4,000 CPU.
That's a ~10% price difference.

~~~
bitL
They could have just released X399-compatible TRs that were e.g.
frequency/TDP/voltage-limited with just the older PCIe, or made it
configurable. They didn't need to do that for the new EPYCs anyway, so there
obviously was a way (the new EPYC socket was needed only for >32-core parts).

I had planned to bump my Zenith Extreme TR with 128GB of ECC RAM to 32 cores
this gen and use it for e.g. gaming, while investing in a TRX80/WRX80 64-core
TR. Now I'm actually pretty upset; I'd rather invest in a bunch of RTX 8000s.
They went from something I was looking forward to over the past year to
something I'd like to forget about ASAP, like the final GoT season... I might
even become an Intel fanboy now.

~~~
kllrnohj
> They didn't need to do that for new EPYCs anyway so there obviously was a
> way (new EPYC socket was needed only for >32c parts).

Epyc has a different PCIe layout from Threadripper and always has.

> I planned to bump my Zenith Extreme TR with 128GB ECC RAM to 32c from this
> gen and use it for e.g. gaming

I mean, you still can; it just costs slightly more than it otherwise would
have? And instead of selling 1 used part you now sell 2?

Like I said, I agree it sucks, but you seem to be blowing this way out of
proportion. I'm far more annoyed at the missing lower-end SKUs than at the
motherboard cost. Where's the update for the 12-core, where the platform I/O
is more valuable than the raw core count?

> I might even become an Intel fanboy now.

You're going to become a fanboy of the company that _never_ does backwards
compatibility just because you didn't get 3 generations of backwards
compatibility on 1 out of 3 platforms?

~~~
bitL
> blowing this out of proportion

The best way (and time) to express disappointment is to do it right away and
in full force. If AMD were on IMDb, they would get 1/10 for how they handled
this. I have every right to behave emotionally instead of rationally anyway.

Used TRs sell for peanuts, and the same goes for mobos (there's no demand for
used gear; look at what actually sells rather than what sits listed for
months). It would mean going from $1.5k to $400, writing off about $1.1k in
the process. And there will be plenty on eBay soon, putting even more
downward pressure on prices (both the AM4 3900X and 3950X now beat all
TR1/TR2 parts up to 16 cores, sometimes even 24). The missing low-core parts
are another thing that wasn't well thought out in all this, I agree.

As for Intel, they were always upfront about the need to change mobos with
almost every new generation (the last few were exceptions). I also never had
as many issues with any Intel pro board as I've had with the ASUS Zenith
Extreme, their "flagship" TR mobo, which can't even run 2x Titan RTX
properly...

~~~
kllrnohj
> As for Intel, they were always upfront about the need to change mobos with
> almost every new generation

AMD never said TR4 was forwards-compatible. They _did_ say that for AM4 and
for Epyc's SP3.

Hindsight is 20/20, yada yada, but the lack of a forwards-compatibility
promise should be treated as rolling the dice.

~~~
bitL
I think everybody expected it, since people even booted EPYCs in TR4 boards.

https://www.techpowerup.com/241072/an-epyc-threadripper-der8auer-gets-epyc-cpu-working-on-x399-motherboard

------
pen2l
Here's a bad question from someone who doesn't really understand the
differences between CPU architectures and is therefore apprehensive about
making related decisions:

Will I be hurting myself if I buy a computer with an AMD chip, in that I
might end up in a situation where certain programs won't work for me? E.g.,
if I do fancy 3D modeling (Cinema 4D, fancy renderers), multi-threaded
programming (in MATLAB), physical simulations (in COMSOL), etc.?

~~~
londons_explore
As someone who has just tried to run TensorFlow and found out that for my
specific CPU I can't use the prebuilt Docker images and have to build my own
from scratch: yes, it can certainly be a headache.

~~~
cmcd
The only time I could imagine this being the case is if the code were already
compiled for a specific CPU's instruction-set extensions. In those cases you
would have compatibility issues between different Intel CPUs as well, so I
don't think it's a reason to avoid AMD.
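
One quick way to see what you're dealing with on Linux is to check which
instruction-set extensions the CPU advertises; prebuilt TensorFlow binaries,
for instance, have at times assumed AVX support. A minimal sketch, assuming
Linux's /proc/cpuinfo:

    # List the SIMD extensions this CPU advertises. A binary compiled
    # with, say, AVX will die with "illegal instruction" on a CPU whose
    # flags lack it, regardless of whether that CPU is Intel or AMD.
    def cpu_flags() -> set:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for ext in ("sse4_2", "avx", "avx2", "avx512f"):
        print(f"{ext:8s} {'present' if ext in flags else 'missing'}")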

------
growlist
Sadly, my FX-8350 is still cranking away just fine for my needs. Really
looking forward to upgrading to Ryzen when the time comes.

~~~
account42
The FX processors got a lot of flak over their core count vs. the number of
floating-point units, but for compiling, my FX-8350 has been doing just fine.
Still looking to upgrade to a TR 3970X when they become available, though.

~~~
growlist
Yeah, that's my point: I almost wish the thing would become obsolete more
quickly, but it stubbornly refuses to :)

------
kd3
Can't wait for the 3950X. Hope launch supplies are adequate.

------
tutanchamun
Back when Ultima IX was released around 2000, my PC had 128MB of RAM. Now
Threadripper has the same amount of L3 cache in total (and Epyc twice as
much).

------
rafaelvasco
I've been away from AMD since the Athlon days. Now I finally felt secure
getting an AMD processor with the Ryzen family. I ended up getting a Ryzen 5
3600X since it was on sale; otherwise I would've gotten a regular 3600.
Either way, I ended up paying less for my new build.

------
josteink
> AMD today has lifted the covers on its next generation Threadripper
> platform, which includes Zen 2-based chiplets, a new socket

Hopefully this socket change is for Threadripper CPUs only.

AFAIK they already had their own socket, distinct from the “regular” AM4
socket.

~~~
sp332
AMD has confirmed that AM4 will be supported at least through next year
(https://hothardware.com/news/amd-confirms-am4-socket-support-future-ryzen-processors-2020).
They didn't make any such promises about TR sockets.

------
crb002
Who are the best PC builders to get a Ryzen 9 from, with a fair amount of
RAM, for a dev box?

~~~
kaibee
Is there some reason you're not interested in building it yourself?

Though to answer your question, I don't think you can go wrong with
[https://system76.com/desktops](https://system76.com/desktops)

~~~
chapium
The answer to "why not DIY" is predominantly available time.

~~~
nsxwolf
With how easy it is to slip up and destroy the pins on these things, I'm
amazed at how comfortable everyone seems to be handling a $750 chip with
absolutely no recourse for damaging it besides spending another $750.

~~~
Matthias247
You really have to be unlucky, or do something stupid, to bend the pins. You
just have to pick the CPU up and put it back down somewhere without applying
pressure. If you drop it, or try to force it in somewhere, sure, you can kill
it. But how often do you drop something? For me it's about the same
probability as damaging my car while parking in a spacious parking lot (which
would be even more costly), or dropping my laptop.

~~~
nsxwolf
I recently watched a friend attempt to remove the stock cooler from his
2200G, and it yanked the CPU clean out of the socket, bending about 50 pins
in the process.

Unlucky, sure. Dropping things is unlucky too, and that's all it takes.
People drop things all the time.

~~~
knd775
You're supposed to twist the cooler off to break the thermal compound's seal.
Pulling it straight off is a really bad idea.

------
karpodiem
Does this require liquid cooling?

~~~
Filligree
Not at all. I've got an air-cooled 1920X, which works just fine. Noctua makes
good coolers.

The downside is that an air cooler capable of handling a 200W CPU is going to
be huge; mine has only a few millimeters of clearance, and that's in an EATX
case. AIO water cooling is easier to fit.

~~~
vbezhenar
Can a motherboard break under the cooler's weight? Those huge coolers are
seriously heavy.

~~~
vel0city
These days there's usually a backplate that helps distribute the cooler's
weight over a large part of the board, as opposed to earlier coolers, which
would mount to only a few screw holes. Extra-large aftermarket coolers
usually come with extra-large mounting brackets. I wouldn't be too worried
about any name-brand cooler damaging the board. Follow the directions that
come with the kit and it will probably be fine.

------
gigatexal
Drool. I want one! make -j64 ;)

------
PhasmaFelis
Whenever I read "threadripper" I think of one of these and get confused.
[https://en.wikipedia.org/wiki/Seam_ripper](https://en.wikipedia.org/wiki/Seam_ripper)

