
AMD Ryzen 5 2500X and Ryzen 3 2300X CPU Review - rbanffy
https://www.anandtech.com/show/13945/the-amd-ryzen-5-2500x-and-ryzen-3-2300x-cpu-review
======
dragontamer
We're all processor geeks around here, right?

AMD Ryzen is a good architecture at a good price. But compared to Intel, there
are two important differences IMO:

1\. pext / pdep are emulated -- it takes many cycles to execute a pext or pdep
instruction, while Intel can execute them once per clock. These are crazy
awesome instructions for any low-level programmer, and it's a shame they
aren't practical to use on AMD Zen processors.

2\. Zen is a bit slower with 256-bit AVX instructions.
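For anyone who hasn't met these instructions: pext gathers the bits selected by a mask into the low end of the result, and pdep scatters low bits back out to the mask positions. A pure-Python sketch of their semantics (the function names are mine; this models the BMI2 instructions in software, it is not a real API):

```python
# Software models of the BMI2 pext/pdep instructions. Recent Intel
# cores execute each in about a cycle; Zen 1/Zen 2 run them in
# microcode, taking many cycles per call.

def pext(value: int, mask: int) -> int:
    """Parallel bit extract: gather the bits of `value` selected
    by `mask` and pack them into the low bits of the result."""
    out, pos = 0, 0
    while mask:
        low = mask & -mask        # lowest set bit of the mask
        if value & low:
            out |= 1 << pos
        pos += 1
        mask &= mask - 1          # clear that bit and continue
    return out

def pdep(value: int, mask: int) -> int:
    """Parallel bit deposit: scatter the low bits of `value`
    out to the bit positions selected by `mask`."""
    out, pos = 0, 0
    while mask:
        low = mask & -mask
        if (value >> pos) & 1:
            out |= low
        pos += 1
        mask &= mask - 1
    return out

# Compress the masked byte down, then spread it back out.
assert pext(0xAB12, 0xFF00) == 0xAB
assert pdep(0xAB, 0xFF00) == 0xAB00
```

The hardware versions do exactly this, but over a whole 64-bit word in constant time, which is what makes them so useful on bit-packed data.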

\----------

Bonuses:

1\. Zen offers more cores per dollar

2\. Zen offers two AES-encryption units per core. This means you can run two
AES instructions per clock tick. Dunno why AMD does this, but it's kinda cool
in some obscure cases I've coded.

~~~
eloff
pext/pdep are awesome, but I imagine you'd never notice the difference in real
world usage. You'd have to use a program often where those instructions are on
the critical path and comprise a significant percentage of execution time. The
chances of that are slim to none. You may well notice the extra cores though,
to a point, depending on what you do.

~~~
dragontamer
I was writing a program similar to the 4-coloring problem. I represented
colors as 2-bits (color 0, 1, 2, and 3).

I also created a bitmask representation of relations, which would represent
1-variable in 4-bits, 2-variables in 16-bits, 3-variables in 64-bits, and
4-variables in 256 bits.

Ex: Texas / Oklahoma / Arizona relation would be a 64-bit number ("true" means
a color-set is in the relation. "False" means the color-set is not in the
relation), and extracting or packing the data into these three variables would
be a pdep or pext operation.

Extracting data (pext) would be a "select" operation, while PDEP + OR would be
an "update" operation over the relation. I've written a join for fun, but I
haven't gotten much further than that. First, because pdep / pext were slow on
my machine. Second, because I figured out an alternative solution to my
particular problem.
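To make that encoding concrete, here is a sketch of the select and update operations under the scheme described above. The helper names and the variable ordering are my own assumptions, and `pext`/`pdep` are software stand-ins for the actual BMI2 instructions:

```python
# Sketch of the 3-variable relation encoding described above.
# Colors are 2 bits each, so a relation over 3 variables is a
# 64-bit bitset: bit index (a << 4) | (b << 2) | c is set iff
# the color tuple (a, b, c) is in the relation.

def pext(value, mask):
    """Software model of the BMI2 pext instruction."""
    out, pos = 0, 0
    while mask:
        if value & (mask & -mask):
            out |= 1 << pos
        pos += 1
        mask &= mask - 1
    return out

def pdep(value, mask):
    """Software model of the BMI2 pdep instruction."""
    out, pos = 0, 0
    while mask:
        if (value >> pos) & 1:
            out |= mask & -mask
        pos += 1
        mask &= mask - 1
    return out

def var_mask(var, color):
    """64-bit mask of the 16 indices where variable `var`
    (0 = the high two bits of the index) equals `color`."""
    shift = 2 * (2 - var)
    return sum(1 << i for i in range(64) if (i >> shift) & 3 == color)

def select(rel3, var, color):
    """Fix one variable: pext compresses the 16 matching entries
    into a 16-bit relation over the remaining two variables."""
    return pext(rel3, var_mask(var, color))

def update(rel3, var, color, rel2):
    """OR a 16-bit two-variable relation back into the slots
    where `var` equals `color`: the PDEP + OR trick."""
    return rel3 | pdep(rel2, var_mask(var, color))

# Only the color tuple (1, 2, 3) is allowed: index (1<<4)|(2<<2)|3 == 27.
rel = 1 << 27
assert select(rel, 0, 1) == 1 << ((2 << 2) | 3)   # (b, c) = (2, 3) survives
assert update(0, 0, 1, 1 << ((2 << 2) | 3)) == rel
```

With hardware pext/pdep, each select or update is a couple of instructions over the whole 64-bit relation, which is where the speedup would come from.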

\---------

I think the pext / pdep instructions have HUGE implications for the 4-coloring
problem, 3-SAT, constraint solvers, etc. More researchers should probably
look into those two instructions.

Just look at Binary Decision Diagrams, and other such combinatorial data
structures, and you can definitely see the potential uses of PEXT / PDEP all
over the place.

[https://en.wikipedia.org/wiki/Binary_decision_diagram](https://en.wikipedia.org/wiki/Binary_decision_diagram)

~~~
itissid
Wouldn't this sort of thing also be very common when applying bitmap masks to
bit arrays in frameworks like numpy?

~~~
dragontamer
I can't say that I've used numpy before. But this function looks similar to
pext / pdep:

[https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/...](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.packbits.html)

------
tyingq
Since it usually comes up in discussion about Ryzen, here's the deal with ECC
RAM support: [http://www.hardwarecanucks.com/forum/hardware-canucks-review...](http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/75030-ecc-memory-amds-ryzen-deep-dive.html)

Tldr: Works for motherboards that support it, but not officially
supported/tested/etc.

~~~
pedrocr
Finding a motherboard that supports it is relatively easy. The Asus Prime X370
Pro for example seems like a good choice for a simple home server with ECC and
8 SATAs. The problem is actually finding reasonable ECC RAM.
Unregistered/unbuffered ECC RAM is an unusual configuration that most
manufacturers don't provide. It's hard to find, expensive, and typically
clocked slower, which Zen is supposedly sensitive to.

Shouldn't we have moved to ECC RAM everywhere a long time ago? With economies
of scale would it actually be any more expensive or slower? There's no place
where the extra safety is a negative, is there?

~~~
kop316
ECC RAM by design is more expensive. Usually every 8 bits of data need a 9th
check bit, so you need 9/8 the RAM (12.5% more) to support ECC. Plus you need
to make the parity calculation every time you go to RAM.
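The check bits buy more than simple parity: ECC DIMMs typically implement a (72,64) SECDED code (8 check bits per 64 data bits, hence the 9/8 overhead) that can correct any single flipped bit. A toy single-error-correcting Hamming code over one byte, written from the textbook construction rather than any real memory controller, shows the mechanism:

```python
# Toy single-error-correcting Hamming code over one byte. Real ECC
# DIMMs use a (72,64) SECDED code -- 8 check bits per 64 data bits --
# but the correction mechanism is the same idea.

DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]   # non-power-of-two positions
PARITY_POS = [1, 2, 4, 8]                # check bit at each power of two

def encode(byte):
    """Spread 8 data bits over positions 1..12 and fill in the check
    bits: check bit p covers every position q with q & p nonzero."""
    word = {p: (byte >> i) & 1 for i, p in enumerate(DATA_POS)}
    for p in PARITY_POS:
        word[p] = 0
        word[p] = sum(bit for q, bit in word.items() if q & p) & 1
    return word

def correct(word):
    """Recompute the checks; a nonzero syndrome spells out the
    position of the flipped bit, which we repair before decoding."""
    syndrome = 0
    for p in PARITY_POS:
        if sum(bit for q, bit in word.items() if q & p) & 1:
            syndrome |= p
    if syndrome:
        word[syndrome] ^= 1               # fix the single-bit error
    return sum(word[p] << i for i, p in enumerate(DATA_POS))

stored = encode(0b10110010)
stored[6] ^= 1                            # one bit flips in storage
assert correct(stored) == 0b10110010      # ECC repairs it on read
```

The "parity calculation on every access" is exactly this encode on write and syndrome check on read, done in hardware in the memory controller.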

Whether it's needed or not...that's use case dependent.

~~~
zozbot123
It's not just 9/8 the RAM, but 9/8 the memory bandwidth on the bus too. So,
yes, a pretty nasty performance hit for typical use.

~~~
p1necone
I wonder how feasible it would be to build ECC ram where you could toggle the
ECC part and just use the extra capacity if you so wished.

~~~
WrtCdEvrydy
ECC is a hardware level implementation usually.

~~~
shawnz
But why does it have to be implemented in the DIMMs? Why not in the memory
controller, such that any RAM could be used with or without parity?

~~~
agapon
ECC logic is implemented in the memory controller (in the CPU these days). The
ECC DIMM just provides extra chips to store ECC bits. And the motherboard, if
it supports ECC, provides just extra data lanes that connect the extra DIMM
chips to the appropriate pins on the CPU.

------
reilly3000
I've been running a ThreadRipper 1950X in my main box for the past 15 months
or so and am generally extremely pleased with the results. However, my biggest
takeaway from the experience of having 32 cores has been that an
embarrassingly large amount of the software I use on a daily basis for
productivity runs in a single thread. My expectation was that UI blocking
would be rare- it isn't, particularly with Chrome, Firefox, and Slack. Jira in
the browser is terrible- even with insane resources and 1Gbps bandwidth I
regularly have to wait 10-15 seconds to be able to enter text.

~~~
zozbot123
> an embarrassingly large amount of the software I use on a daily basis for
> productivity runs in a single thread. My expectation was that UI blocking
> would be rare- it isn't, particularly with Chrome, Firefox, and Slack. Jira
> in the browser is terrible- even with insane resources and 1Gbps bandwidth I
> regularly have to wait 10-15 seconds to be able to enter text.

Why would you expect anything different when JS _does_ run in a single thread?
We'll have to wait for WebAssembly to have anything like real multithreading,
with good-enough performance, on the Web.

~~~
wongarsu
Not just in a single thread, but in the same thread as DOM rendering. It's
conceptually impossible to use a website while its JavaScript executes
(except for web workers etc.).

Still, JavaScript issues tend to cause too many hang-ups in other parts (the
browser UI or other websites). We are in the awkward position where OS
threads are considered too heavyweight to dedicate one to each tab, but
browsers haven't implemented a more lightweight alternative either. So things
just share threads when they really shouldn't.

~~~
pythonaut_16
Browser green threads could be amazing!

Specifically an Erlang/Elixir-ish actor async implementation

------
kop316
So an interesting thing happened to me last month. I had a Gigabyte ECC Pro
150 with a Xeon processor, and it died (hardware failure; it refused to POST
after I'd had it for two years).

I run Debian Stable. When I swapped out the CPU (Ryzen 7 2700X), motherboard,
and RAM and powered on Debian, it booted up normally and automatically
configured itself for the new CPU, motherboard, and RAM.

~~~
throwaway2048
Linux is much more flexible with booting than Windows typically is. Generally,
as long as the hardware is supported in the kernel (which is virtually
always), there won't be issues.

Even Windows has gotten better about this; it's usually possible to image
drives and boot them in a VM without everything exploding.

~~~
AnIdiotOnTheNet
As of Windows 10 it is reasonable to expect to be able to rip a drive out of
any given computer and put it in another one and have it work. Actually did
this recently to jump several years in hardware on my workstation.

~~~
tracker1
If you do that more than once in a 90 day window, you will likely have
activation issues.

~~~
AnIdiotOnTheNet
Not in a corporate environment with KMS though.

------
davidy123
I really wanted to go AMD for my current build, but for full-stack Javascript
development, or most things involving a single thread, Intel is often
significantly faster. I would have gone with AMD anyway to support
competition, but the requirement to add a graphics card for all their
performance CPUs, which I don't want until I actually need one, always tilted
things toward Intel.

~~~
mrweasel
For a development box, is the single-thread performance on the AMD system
really something you'd notice? For a production system you'd ideally pick the
CPU architecture that's best suited for your workload, but most of us just go
with "whatever is currently under our hypervisor".

In my mind you're doing something incredibly specialized if you notice the
difference between AMD and Intel, or between current generation and last
generation CPUs. Video encoding is really the only "mainstream" application I
can think of.

For Javascript development I doubt you would notice the difference, unless
it's something highly specialised.

~~~
noir_lord
It’s not imo.

I mean, theoretically my 2700X has slightly worse performance per core (though
not at the same price point; it's not fair to compare a $350 processor with a
$600+ one), but it doesn't matter when I have webpack running with 4 threads,
type checking on a separate thread, a DB server, and IntelliJ all running
without remotely a stutter.

~~~
arvinsim
Yes, I would venture that most developers would benefit more from more cores
than raw CPU performance.

------
xcircle
I'm looking for an upgrade for my PC at the moment and can't decide between
the Ryzen 5 2600(X) and the Ryzen 7 2700(X).

Now that I know AMD is planning to release Zen 2 in May(?), I think I should
buy the 2600 because of the price drop at release.

I could also use tips for a good mainboard :)

~~~
tormeh
Gaming: 2600X

Workstation: 2700

Only a select few games are able to use more than 6 cores, and then only in
some situations. For compilation and other workstation tasks, 8 cores (and
more) are king, but the bigger parts are expensive, so unless you have money
to blow, go for the 2700.

~~~
xcircle
Yeah it will be for gaming ;)

And what about the difference between the 2600X and the non-X? I think I need
an aftermarket cooler for either variant... and the 2600 (non-X) is cheaper
in power consumption and the clock losses are acceptable, or not?

And a B450 mainboard will also be good enough? I'm using an Nvidia 1060 6GB
graphics card.

~~~
Narishma
I don't think you need an aftermarket cooler unless you plan to overclock the
CPU.

------
ksec
Unfortunately, being on the Mac platform means I may never get a taste of
AMD's Zen CPUs. I think Intel knew that opening up Thunderbolt could spell
the end of the Apple-Intel relationship.

~~~
Koshkin
> _means_

Sticking to a closed platform means many things, including, for instance, that
you 'may never get a taste of' building your own box...

------
rb808
I have a 6-year-old i5 and I'm about to upgrade. It's interesting that the
main reasons are NVMe and graphics for a 4K monitor; the extra CPU speed is
just a bonus.

~~~
loser777
I am still on a 4.3GHz 3570K at home (holding up pretty well after some minor
mechanical/percussive maintenance revived a dead memory channel caused by a
flaky CPU socket). I'm eyeing 3rd Gen Ryzen later this year but for now
upgrading from 16GB DDR3 to 16GB DDR4 doesn't seem cheap ;).

~~~
LUmBULtERA
DDR4 RAM prices have fallen a lot lately. There are sales for 2x8GB sets for
around $90 now, with 32 GB sets coming in under $200 as well.

------
holtalanm
definitely going ryzen for my next desktop CPU.

~~~
jchw
Can recommend. I have a Ryzen 7 2700X and it mops the floor with all my other
builds (which are admittedly all older builds). It runs Linux great and the
IOMMU works very well -- it seems they now officially support it, so GPU
passthrough has worked super well and saves me from needing to dual boot.

Another bonus: the stock CPU fan, although flashy with its RGB LEDs, is
formidable and can probably even stand up to a bit of overclocking.

I am glad to see competition in the CPU space again. It's been too long.

~~~
kop316
What are you using for your virtualization?

I may have to try that when I get back. I have a Ryzen 7 2700X and an AMD R9
Fury.

~~~
jchw
I'm passing through a spare GTX 1070 using good ol' QEMU/KVM.

The PCI passthrough itself is extremely easy to set up in virt-manager; the
system-level configuration is a bit more involved. You may also want a KVMFR
solution like Looking Glass, since otherwise you'll need a physically
separate keyboard/mouse/video setup.

I'm using Nix, so my system configuration is easy to summarize:
[https://gist.github.com/jchv/b0e4b39679e450536a17cc6a5d69169...](https://gist.github.com/jchv/b0e4b39679e450536a17cc6a5d69169a)

(On that note, I can definitely recommend NixOS, it's hard to even describe
how helpful it's been in making my configuration understandable and
reproducible.)

There are plenty of guides as well. Here's one for NixOS, but undoubtedly you
can find more.

[https://forum.level1techs.com/t/nixos-vfio-pcie-passthrough/...](https://forum.level1techs.com/t/nixos-vfio-pcie-passthrough/130916)

I don't think most commercial VM solutions support this kind of
configuration. I'd guess VirtualBox might, but I know for a fact VMware
Workstation doesn't. (And there's no VMware Workstation package for NixOS
yet, so my license is collecting dust at the moment :()

It's worth noting that you need a separate GPU for this right now. Intel just
recently started supporting something called GVT-g that lets you split an
Intel IGP across multiple VMs; that's not as useful for me since I want a
better GPU, but maybe useful to others. I have yet to try it.

~~~
dave7
Can you specify which motherboard please? And I guess it's a pair of Nvidia
cards fitted?

I'd like a setup like this in my future!

~~~
jchw
Sure. I believe the motherboard is an ASUS X470 Prime Pro. I picked it up at
Fry's and I'm not home to look at what it is so I could be a little off.

It is indeed a pair of Nvidia cards, but that part only matters a little. I
don't particularly recommend Nvidia for the host, and as far as I know you
can run whatever card you want on the Linux host. Looking Glass may care
about the guest GPU simply because it's still a bit experimental, but there's
no reason I'm aware of that it can't work with AMD or Intel graphics
processors.

~~~
westmeal
Do the GPUs have to be the same architecture/model?

~~~
jchw
No, there's no reason I'm aware of that they would have to be similar in any
way.

