Hacker News
AMD Ryzen 5 2500X and Ryzen 3 2300X CPU Review (anandtech.com)
114 points by rbanffy 6 days ago | 120 comments

We're all processor geeks around here, right?

AMD Ryzen is a good architecture at a good price. But compared to Intel, there are two important differences IMO:

1. pext / pdep are emulated -- they take many cycles to execute on Zen, while Intel executes them once per clock. These are crazy awesome instructions for any low-level programmer, and it's a shame they're effectively unusable on AMD Zen processors.

2. Zen is a bit slower with 256-bit AVX instructions.
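For anyone who hasn't met the two instructions in point 1: pext gathers the bits selected by a mask into contiguous low bits, and pdep scatters low bits back out to the mask positions. A pure-Python model of the semantics (not the performance):

```python
def pext(value, mask):
    """Software model of x86 PEXT: gather the bits of `value`
    selected by `mask` into the contiguous low bits of the result."""
    result, out = 0, 0
    while mask:
        low = mask & -mask        # isolate lowest set bit of the mask
        if value & low:
            result |= 1 << out
        out += 1
        mask &= mask - 1          # clear that mask bit
    return result

def pdep(value, mask):
    """Software model of x86 PDEP: scatter the low bits of `value`
    out to the bit positions selected by `mask`."""
    result, bit = 0, 0
    while mask:
        low = mask & -mask
        if value & (1 << bit):
            result |= low
        bit += 1
        mask &= mask - 1
    return result

# Gather the nibble sitting under the mask, then scatter it back:
print(bin(pext(0b10110010, 0b11110000)))  # 0b1011
print(bin(pdep(0b1011, 0b10110000)))
```

On Intel (Haswell and later) each of these is one BMI2 instruction with single-cycle throughput; on Zen 1/Zen+ they are microcoded and take dozens of cycles, which is the whole complaint.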



1. Zen offers more cores per dollar

2. Zen offers two AES-encryption units per core. This means you can run two AES instructions per clock tick. Dunno why AMD does this, but it's kinda cool in some obscure cases I've coded.

Zen 2 is adding 256 bit registers for 256 bit AVX instead of doing it in 2 passes with 128 bit registers.

If AMD can release a 16-core Threadripper with native 256-bit AVX that doesn't thermally throttle, that would be awesome. The same could be said of Intel and AVX-512. Hopefully this year we will see one or the other.

I hope AVX-512 is studied for its lessons on how not to do an ISA. I have very few technical complaints about it, but the following were deal breakers:

1. Limiting it to a subset of chips, and initially not releasing it for client chips at all. Creating ISA extensions for a small part of the market is the best way to ensure they never see any use.

2. Reasoning about AVX-512 performance is ridiculously difficult for most workloads because of the clock penalty. Unless you are running AVX-512 all the time, you will likely see performance drops.

I suspect the dual AES units are a side effect of Zen having two 128-bit SIMD units. The AES instructions use the SIMD registers so naturally the AES implementation is integrated into the SIMD unit. Presumably it was easier to duplicate the AES engine along with the rest of the SIMD unit than to split it out.

pext/pdep are awesome, but I imagine you'd never notice the difference in real world usage. You'd have to use a program often where those instructions are on the critical path and comprise a significant percentage of execution time. The chances of that are slim to none. You may well notice the extra cores though, to a point, depending what you do.

I was writing a program similar to the 4-coloring problem. I represented colors as 2-bits (color 0, 1, 2, and 3).

I also created a bitmask representation of relations, which would represent 1-variable in 4-bits, 2-variables in 16-bits, 3-variables in 64-bits, and 4-variables in 256 bits.

Ex: Texas / Oklahoma / Arizona relation would be a 64-bit number ("true" means a color-set is in the relation. "False" means the color-set is not in the relation), and extracting or packing the data into these three variables would be a pdep or pext operation.

Extracting data (pext) would be a "select" operation, while PDEP + OR would be an "update" operation over the relation. I've written a join for fun, but I haven't gotten much further than that: first, because pdep / pext were slow on my machine, and second, because I figured out an alternative solution to my particular problem.
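To make that encoding concrete, here's a toy sketch of a two-variable relation as described above (the layout and names are my guesses, not the parent's actual code): bit (a*4 + b) of a 16-bit mask records whether the color pair (a, b) is in the relation.

```python
def not_equal_relation():
    """Build the 16-bit "colors must differ" relation used in map
    coloring: bit (a*4 + b) is set iff the pair (a, b) is allowed."""
    rel = 0
    for a in range(4):
        for b in range(4):
            if a != b:
                rel |= 1 << (a * 4 + b)
    return rel

def allowed(rel, a, b):
    """Membership test: is the color pair (a, b) in the relation?"""
    return bool(rel >> (a * 4 + b) & 1)
```

With this layout, pulling out the 4 bits for a fixed value of one variable is exactly a pext with a strided mask, and writing them back is a pdep followed by OR -- the "select" and "update" operations described above.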


I think the pext / pdep instructions have HUGE implications for the 4-coloring problem, 3-SAT, constraint solvers, etc. More researchers probably should look into those two instructions.

Just look at Binary Decision Diagrams, and other such combinatorial data structures, and you can definitely see the potential uses of PEXT / PDEP all over the place.


Wouldn't this sort of thing also be very common when applying bitmap masks to bit arrays in frameworks like numpy?

I can't say that I've used numpy before, but that function does sound similar to pext / pdep.
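For what it's worth, the stdlib has an element-wise cousin: itertools.compress (like boolean mask indexing in numpy) gathers the elements selected by a mask the way pext gathers the bits selected by one:

```python
from itertools import compress

data = [10, 20, 30, 40]
mask = [1, 0, 1, 0]

# gather the elements selected by the mask -- an element-wise "pext"
selected = list(compress(data, mask))
print(selected)  # [10, 30]
```

The difference is granularity: numpy-style masking works on array elements, while pext/pdep work on individual bits within one register.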


Hobby code: I'm slowly developing a Gin rummy [1] engine. Card sets are represented by 64-bit integers and all operations (finding melds, sets, etc.) are implemented with bitwise operators.

I have used pext/pdep for the iterator implementation (iterate over all subsets of a card set, or all combinations of n cards).

(e.g. brute-forcing all 10-card combinations, filtering, and printing out all the ones which can be knocked -- evaluating 15,820,024,220 hands -- takes 70 seconds, single-threaded, on my 7th-gen Intel i3.)

[1] https://en.wikipedia.org/wiki/Gin_rummy
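A sketch of how such an iterator can work (my reconstruction under guessed names, not the parent's actual code): Gosper's hack enumerates every k-bit pattern in a dense "compressed" space, and pdep (modeled in software here) scatters each pattern onto the real card positions.

```python
def pdep(value, mask):
    """Software model of x86 PDEP: scatter the low bits of `value`
    onto the bit positions selected by `mask`."""
    result, bit = 0, 0
    while mask:
        low = mask & -mask
        if value & (1 << bit):
            result |= low
        bit += 1
        mask &= mask - 1
    return result

def k_subsets(card_set, k):
    """Yield every k-card subset of `card_set` (a bitmask)."""
    n = bin(card_set).count("1")
    if k == 0:
        yield 0
        return
    if k > n:
        return
    sub = (1 << k) - 1                  # smallest k-bit pattern
    while sub < (1 << n):
        yield pdep(sub, card_set)       # map compressed bits -> cards
        c = sub & -sub                  # Gosper's hack: next pattern
        r = sub + c                     # with the same popcount
        sub = (((r ^ sub) >> 2) // c) | r
```

With hardware BMI2 the pdep step is a single instruction, which is what makes this style of iterator competitive on Intel and painful on Zen 1.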

A while back, I was working on a fun little video game side project which used BMI2 instructions to compute Morton codes on the critical path.

Voxels were stored in a buffer sorted in Morton order. The idea was to balance performance improvements realized by increased spatial locality against the cost of computing the Morton codes. The trade-off was only worthwhile on Intel because of the use of pdep/pext in optimized encode/decode functions.

I imagine something similar would probably apply to texture lookups in a software 3D renderer.
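The encode/decode functions being alluded to are, on BMI2 hardware, one pdep or pext per coordinate. Modeled in Python (software models standing in for the single-instruction versions):

```python
def pdep(value, mask):
    # software model of x86 PDEP (bit scatter)
    result, bit = 0, 0
    while mask:
        low = mask & -mask
        if value & (1 << bit):
            result |= low
        bit += 1
        mask &= mask - 1
    return result

def pext(value, mask):
    # software model of x86 PEXT (bit gather)
    result, out = 0, 0
    while mask:
        low = mask & -mask
        if value & low:
            result |= 1 << out
        out += 1
        mask &= mask - 1
    return result

EVEN = 0x5555555555555555   # bit positions 0, 2, 4, ... hold x
ODD  = 0xAAAAAAAAAAAAAAAA   # bit positions 1, 3, 5, ... hold y

def morton_encode(x, y):
    """Interleave the bits of x and y: one pdep each, plus an OR."""
    return pdep(x, EVEN) | pdep(y, ODD)

def morton_decode(code):
    """De-interleave: one pext per coordinate."""
    return pext(code, EVEN), pext(code, ODD)
```

Without pdep/pext you fall back to the classic shift-and-mask "bit spreading" sequences, which is why the trade-off mentioned above only paid off on Intel.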

> I imagine something similar would probably apply to texture lookups in a software 3D renderer.

Except that for a 3D rasterizer you'd probably be better off calculating Morton code for 8 pixels at once in a SIMD register and then using vpgatherdd to fetch 8 ARGB pixel values "in parallel" (in theory, in practice AVX2 gather might not be any faster than scalar loads).


I will agree with dragon tamer that for some real-world codes PEXT/PDEP sits right in the middle of the hot path. It isn't just the clock cycle savings either, for some logic it can substantially simplify the code path. There's a lot of neat wizardry that can be done by composing sequences of those two instructions (mixed with other basic integer ops).

I don't use them often, but there are cases where I would not want to try to write code without them.

Do any compilers use the pext/pdep instruction?

PEXT / PDEP are not instructions that would compile automatically in anything I'm aware of.

It's a new fundamental bitwise operator. Some other programmers have called them a "bitwise gather (pext)" and "bitwise scatter (pdep)" (EDIT: had it backwards the first time). It's a very powerful way to think about bits that Intel introduced with these instructions.

If you have any data-structure that is bitwise, I can almost guarantee you that PEXT or PDEP will be useful in some operation. These instructions have been used to calculate bishop / rook moves in less than five operations.


And yes, remember that bishops and rooks can be blocked by other pieces. So given all the pieces on a chessboard, and the location of the bishop in question, calculate all possible locations the bishop can move (after accounting for "being blocked").
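A miniature of that trick ("PEXT bitboards"), reduced from a full chessboard to a single 8-square rank so it fits in a comment -- a toy sketch of the idea, not real engine code. pext compresses the only blocker bits that can matter into a dense table index:

```python
def pext(value, mask):
    # software model of x86 PEXT (bit gather)
    result, out = 0, 0
    while mask:
        low = mask & -mask
        if value & low:
            result |= 1 << out
        out += 1
        mask &= mask - 1
    return result

def pdep(value, mask):
    # software model of x86 PDEP (bit scatter)
    result, bit = 0, 0
    while mask:
        low = mask & -mask
        if value & (1 << bit):
            result |= low
        bit += 1
        mask &= mask - 1
    return result

def rank_attacks_slow(sq, occupied):
    """Squares a rook on `sq` (0..7, one rank) attacks, by scanning."""
    attacks = 0
    for step in (1, -1):
        s = sq + step
        while 0 <= s < 8:
            attacks |= 1 << s
            if occupied >> s & 1:   # blocked: include blocker, stop
                break
            s += step
    return attacks

# Relevant-occupancy masks: edge squares can never block anything
# beyond them, so only bits 1..6 (minus the rook's own square) matter.
MASK = [0x7E & ~(1 << sq) for sq in range(8)]

# Precompute: enumerate every blocker pattern with pdep; its pext
# image is the dense table index.
TABLE = [[0] * 64 for _ in range(8)]
for sq in range(8):
    for idx in range(1 << bin(MASK[sq]).count("1")):
        TABLE[sq][idx] = rank_attacks_slow(sq, pdep(idx, MASK[sq]))

def rank_attacks_fast(sq, occupied):
    """Lookup: one pext plus one table load per query."""
    return TABLE[sq][pext(occupied, MASK[sq])]
```

The full-board version does exactly this with 64-bit rook and bishop masks; with hardware PEXT the whole move generation step is "one pext, one load", which is where the "less than five operations" claim comes from.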

Don't optimizers already do quite a lot of analysis to output what you mean rather than what you say? I can think of quite a lot of bitwise shift/and and shift/or patterns that should be automatically convertible to single-instruction pext/pdeps.

Not a processor geek per se, but do appreciate some insights into a bit of these details. Aside, I'm really happy to see AMD being competitive across most products and even better bang for the buck at a lot of price points. Waiting on Zen 2 architecture to upgrade my desktop (will be over 5yo at that point)... depending on initial release or Threadripper version.

I can't even figure out what those instructions do - there is no way they get heavy use in common software any time soon.

Since it usually comes up in discussion about Ryzen, here's the deal with ECC RAM support: http://www.hardwarecanucks.com/forum/hardware-canucks-review...

Tldr: Works for motherboards that support it, but not officially supported/tested/etc.

Finding a motherboard that supports it is relatively easy. The Asus Prime X370 Pro, for example, seems like a good choice for a simple home server with ECC and 8 SATA ports. The problem is actually finding reasonable ECC RAM. Unregistered/unbuffered ECC RAM is an unusual configuration that most manufacturers don't provide. It's hard to find, expensive, and much slower, which Zen is supposedly sensitive to.

Shouldn't we have moved to ECC RAM everywhere a long time ago? With economies of scale would it actually be any more expensive or slower? There's no place where the extra safety is a negative, is there?

ECC RAM not being in consumer PCs is largely a market segmentation pushed by Intel[1]. It is in fact quite ridiculous if you consider that essentially every other bus, interconnect and storage in your computer has error correction except _main_ memory (and the main memory bus). If it weren't so normalized we'd go like "Dude, do you even realize how lol it is to have no parity on all of the most important data the computer is working with?!".

Also the lost productivity due to main memory errors not being detected probably easily goes into the billions. Thanks, Intel.

[1] There was a time when consumer systems genuinely didn't support ECC for lack of hardware support. This hasn't been the case for many, many years.

Intel: Fuck You, Pay Me.

I don't believe any CPU cache has ECC, either. Which is where the memory you're actually working with lives.

Also no other interconnect in your system is as much of a bottleneck as the one to main memory is. It's worth keeping that in mind before entirely blaming this on "market segmentation". ECC RAM does actually slow down the part of a system that is already the bottleneck in most common situations.

> I don't believe any CPU cache has ECC, either. Which is where the memory you're actually working with lives.

I'm not aware of any desktop CPU that doesn't have ECC caches. CPU internal busses use ECC and external interconnects (e.g. PCIe, DMI, QPI/UPI) use it as well.

> ECC RAM does actually slow down the part of a system that is already the bottleneck in most common situations.

ECC invariably introduces some additional latency in the memory controller, but I don't see a persuasive argument why it would reduce throughput. It would surprise me if this additional latency is measurable, given that the ECC logic is already in the core and in the data path anyway, and the system configuration (AMD, Intel) / CPU fuses (Intel) only decide whether it is active.

That being said buffered ECC modules are usually not the fastest. I don't think that this is due to any technical limitation per se, but rather market demand (cost, perf per Watt).

> I'm not aware of any desktop CPU that doesn't have ECC caches. CPU internal busses use ECC and external interconnects (e.g. PCIe, DMI, QPI/UPI) use it as well.

I believe all of those are just parity checked and not ECC?

> ECC invariably introduces some additional latency in the memory controller

Latency is a non-trivial factor here, too, though.

> That being said buffered ECC modules are usually not the fastest. I don't think that this is due to any technical limitation per se, but rather market demand (cost, perf per Watt).

Poking around, it looks like ECC RAM tops out at DDR4-2666 @ 1.2V. By comparison there's no shortage of DDR4-3200+ options at 1.2V. Whether or not this is purely market demand, it doesn't seem like there's a power reason for it.

But you also can't solely blame Intel for a lack of market demand. Even when the choice is there nobody seems to be making ECC memory for high-end desktop usages. Where's the DDR4 3200 for Threadripper or Xeon-W workstations, for example? They surely benefit from the improved bandwidth, or else they wouldn't have triple & quad channel memory. And they'd surely pay the price of admission, because we're talking $3,000+ entry points for builds.

Caches are definitely ECC, because the corresponding MCEs can tell the system both that an error was detected and corrected, and that an uncorrectable error occurred (which by default leads to a kernel panic, IIRC).

Actually, all modern CPUs have ECC protected caches and they have MCA events to report any problems with them too.

Are you confusing parity with ECC?


I haven't bought new RAM lately, but not that long ago it was often much cheaper to buy an old server and outfit it with used ECC DDR3 than to buy equivalent consumer RAM, simply because there wasn't much demand for previous-generation ECC RAM.

Right now DDR4 is fairly new, but as old servers get rotated out I expect a good market for cheap ECC DDR4 sticks that come from used servers but are too small to get reused in new servers. (unregistered/unbuffered is still a problem though)

The main advantage of ECC is reliability so buying used RAM doesn't seem like a great option. If you're willing to buy used a good way to get a nice workstation is to just buy a used workstation machine (e.g., HP Z400/Z600/Z800 line) and just upgrade the storage and GPU. But if you're gaming or trying to upgrade a home server what you want instead is a nice Motherboard/CPU/RAM combination. Right now Ryzen would be a great option for that if there were some good ECC UDIMM options.

Server RAM would be RDIMMs or LRDIMMs while consumer processors mostly can support only UDIMMs.

ECC RAM by design is more expensive. Usually every 8 bits of data need a 9th check bit, so you need 9/8 as much RAM to support ECC. Plus you need to do the ECC calculation every time you go to RAM.

Whether it's needed or not...that's use case dependent.

In other words, a 32GB ECC memory stick has to include an extra 4GB for parity.
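For the curious, the mechanism in miniature: a Hamming(7,4) code spends 3 check bits to locate (and flip back) any single-bit error in 4 data bits. DIMM-scale ECC is the same idea with a wider SECDED code -- 8 check bits per 64 data bits, which is exactly where the 9/8 (and the extra 4GB in a 32GB stick) comes from. A toy Python sketch:

```python
def encode(d):
    """Hamming(7,4): wrap a 4-bit value in 3 check bits.
    Code word layout (positions 1..7): p1 p2 d1 p3 d2 d3 d4."""
    b = [(d >> i) & 1 for i in range(4)]     # d1..d4, LSB first
    p1 = b[0] ^ b[1] ^ b[3]                  # covers positions 1,3,5,7
    p2 = b[0] ^ b[2] ^ b[3]                  # covers positions 2,3,6,7
    p3 = b[1] ^ b[2] ^ b[3]                  # covers positions 4,5,6,7
    return [p1, p2, b[0], p3, b[1], b[2], b[3]]

def correct(code):
    """Recover the 4-bit value, fixing at most one flipped bit."""
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)    # = 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1                 # flip the bad bit back
    return c[2] | (c[4] << 1) | (c[5] << 2) | (c[6] << 3)
```

Real DIMMs compute the wider code in the memory controller, so the "calculation" cost the parent mentions is a bit of XOR logic in hardware, not software work.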

That's actually a price I would be willing to pay.

So you will need to pay $360 instead of $320. Many people would choose the cheaper memory -- I'd guess almost everyone except some PC enthusiasts (last time I discussed it, the majority of PC enthusiasts thought that ECC on the desktop is not needed, so they wouldn't want to pay for it). I agree that ECC is nice to have, but the price is real.

RAM prices already vary by more than 12%, and it seems like there is no shortage of people paying premium for brand recognition, different board colors (green is "out"), useless heat sinks etc. I think there would be plenty of people willing to pay extra for ECC (some for peace of mind, some because they need it, some just to feel superior)

The market segment of "people who are willing to pay a premium for computing devices" is pretty vast, isn't it?

Most people would be fine with a $200 Chromebook, $500 bare-bones Windows notebook, or a garden-variety PC from 2007 but millions of us choose to pay more because we value the additional things that newer, more powerful computing devices give us.

Lots of professionals and enthusiasts gladly pay large premiums for higher-spec gear, even when the improvements are quite small, because those small improvements are enjoyed over many thousands of hours of lifetime use.

Perhaps more to the point, you already see gamers paying premiums for higher-specced memory to enable their overclocking and tweaking endeavors.

So I definitely think there's a market of people who'd pay more for ECC...

I guess they will pay for higher frequency, but for ECC? Not so much. It won't help achieve a higher geek rating, after all. I might be totally wrong about it, of course.

Just like I paid extra for a nice PSU instead of the very cheapest, I would pay more for ECC.

The cost difference for ECC amortized over the life of the hardware is negligible compared to the annoyance and time spent trying to work out what's causing those random bluescreens/reboots/corruption.

I know ECC is more complex, my hypothesis was that in volume the difference would be small enough to not care and that we're only stuck with non-ECC RAM because of path dependence.

>Whether it's needed or not...that's use case dependent.

Even for gaming, having your game crash/misbehave because of bit flips is at the very least annoying. But maybe this is uncommon enough that it doesn't make enough difference?

> Even for gaming, having your game crash/misbehave because of bit flips is at the very least annoying. But maybe this is uncommon enough that it doesn't make enough difference?

My gaming system with non-ECC memory will run memtest all day long without spotting a single error.

I know these errors can & do happen, but I'm fairly certain the number of crashes I've experienced that would have been prevented using ECC memory is single digits at most.

But ECC being in general slower than non-ECC? Well that's very noticeable.

ECC is useless until it is not. Just like insurance, backups, etc.

The failure risk ECC protects against is significantly lower than the risk of lacking insurance or backups, though.

Drive fails and no backups? Potentially terabytes of data vanishes. House burns down and no insurance? Hundreds of thousands of dollars to repair. No ECC and a bit flips? Nothing happens, program crashes, or maybe a single file gets corrupted in an unrecoverable way.

And maybe that file happens to be an important encryption key. Or maybe a whole filesystem gets corrupted if some important metadata is. Or maybe your system develops a hardware problem over time, like an oxidized CPU pin or a contact in a memory slot and then you are getting flaky bits on a regular basis. ECC is useless until you experience a problem of those kinds. And many people will never do. But some will. I have.

Gaming systems are often overclocked and aren't really stable often enough in the first place.

It's not just 9/8 the RAM, but 9/8 the memory bandwidth on the bus. So, yes, a pretty nasty performance hit for typical use.

I wonder how feasible it would be to build ECC ram where you could toggle the ECC part and just use the extra capacity if you so wished.

ECC is a hardware level implementation usually.

But why does it have to be implemented in the DIMMs? Why not in the memory controller, such that any RAM could be used with or without parity?

ECC logic is implemented in the memory controller (in the CPU these days). The ECC DIMM just provides extra chips to store ECC bits. And the motherboard, if it supports ECC, provides just extra data lanes that connect the extra DIMM chips to the appropriate pins on the CPU.

If you do that, you're moving the cost on to the motherboards and increasing the cost for something most consumers won't get any advantage out of.

Is your typical performance limited by memory bandwidth? Or rather by latency?

Once again... that's use-case dependent. But ECC will affect both: you have 9/8 the bits to get from RAM, then you have to do the ECC calculation for each word.

The bus is also 8 bits wider, so it’s a wash.

Pretty sure my next PC build is going to be Zen 2 (3xxx series) and I'm going to try to get ECC memory for it. Even if it costs a little more and is a little slower, I think knowing my data hasn't been corrupted is worth it.

Same. I've been through two computers that have randomly been flaky. It turned out, after 2 years of debugging, that the PSU didn't really like suddenly having load on it (like when the CPU turbo-boosts 8 cores) and that that was leading to crashes. (It was a 1000W PSU, too; I thought overprovisioning would solve all my problems, but I guess not.) I suspected the memory the whole time, though, and having ECC would have at least been one less thing to worry about.

The other thing that annoys me about PC hardware is that the motherboard tries as hard as possible to make your system unstable. I don't want overclocking. I don't want to run the memory at XMP speeds. Just give me a button for "run everything at its conservative spec". (With that in mind, I'm not sure memory manufacturers test anything other than their XMP timings, leaving you to guess whether the non-XMP profile has the right voltage/latency numbers. It's infuriating!)

> It's hard to find,

This wasn't my experience either on memory.net or crucial.com

> expensive

Yes it is, much more than the 1/8th more RAM would require.

> much slower

To quantify: the fastest currently is DDR4-2666, while non-ECC goes up to the crazy DDR4-4700.

>This wasn't my experience either on memory.net or crucial.com

As far as I can tell crucial.com shows a total of 2 options. Both 16GB UDIMMs. One tall, one short. I think last time I checked it had none. I didn't see a single option on memory.net. Remember that you need Unbuffered/Unregistered ECC (UDIMMs with ECC) and there aren't many options of those. RDIMMs will not work.

It's also hard to verify the DIMMs will work because the motherboard's QVL doesn't list any of the ones I've found so far. Not all retailers will have these either, so that seems hard enough to find for me.

People have mentioned better luck buying Samsung-branded ECC UDIMMs. I heard, but couldn't verify, some of their modules are even on the latest QVLs.

> It's hard to find, expensive and much slower which Zen is supposedly sensitive to.

Hard to find -- perhaps. Although these days, with internet and so many sellers, even hard to find things are quite easy to find.

Expensive -- just a little bit more than non-ECC RAM. Probably because of the couple more chips ECC RAM uses.

Much slower -- slower for sure, but again, I think it's not much slower.

Kingston, Samsung, Crucial / Micron all sell UDIMMs.

There are only a few modules of DDR4-2666 ECC RAM from any of those manufacturers. Non-ECC goes to almost 2x that speed and there is plenty of choice. In the places I've looked it's around 40-50% more expensive for 12.5% more RAM chips. These are not small differences.

As someone with a Ryzen 1xxx, X370-based motherboard, and 32GB of ECC RAM the link you give is a bit out of date in that Windows 10 better supports ECC (the same X370 with `wmic memphysical get memoryerrorcorrection` reports 6 now). Much of the rest of the article about a wide selection of memory, finding the motherboard firmware toggles, etc are still valid.

There are also posts at other fora complaining that Hardware Canucks is wrong to suggest that an uncorrectable error should result in an immediate system halt - I leave that argument to those who are interested.

I've been running a Threadripper 1950X in my main box for the past 15 months or so and am generally extremely pleased with the results. However, my biggest takeaway from the experience of having 16 cores / 32 threads has been that an embarrassingly large amount of the software I use on a daily basis for productivity runs in a single thread. My expectation was that UI blocking would be rare -- it isn't, particularly with Chrome, Firefox, and Slack. Jira in the browser is terrible: even with insane resources and 1Gbps bandwidth I regularly have to wait 10-15 seconds to be able to enter text.

That's why I went with an i7-8700K for my desktop, which at the time had the fastest single-thread performance (except the i7-8086K, which was IMO too much money for a tiny bit more speed).

It has 6 physical cores (12 virtual) which I think is enough for almost all workloads. No complaints.

They've now been superseded by the i7-9700K and i9-9900K, which have 8 physical cores each and are slightly faster single-threaded.


Software is nowhere near making use of all those threads under most circumstances IMO.

Cue all the people saying how they couldn't live without their 32 threads...

> Cue all the people saying how they couldn't live without their 32 threads

I can live without 24 threads, but I love not having my computer become unusable because I'm encoding video or doing some other CPU-heavy task. Having more than 4 threads has opened a whole new world of thinking about how to parallelize common tasks -- not ever having to wait for your computer feels like a super-power. Paradoxically, this has freed me up to use an ARM Chromebook for day-to-day usage: when I need firepower, I remote into the TR workstation (smart plug + boot-on-power BIOS + dynamic DNS).

I'll never stop feeling a little exhilaration from typing "make -j 22"

UI lock-ups are because of thread locks, not because you don't have enough cores. Actually, I could load all my 4 cores at 100% with some task and the PC would stay very responsive; it's really hard to notice a difference in most tasks. So yeah, 32 cores are nice when you have work for those cores, but it's not magic. Frequency is magic :)

I have an 8/16 Ryzen with 64GB of memory and most of it sat idle for me as well.

I ended up installing ESXi on the machine, and passing through my video card, USB, and NVMe to the Windows VM. It works great. I then use the spare compute for running vSphere.

> an embarrassingly large amount of the software I use on a daily basis for productivity runs in a single thread. My expectation was that UI blocking would be rare- it isn't, particularly with Chrome, Firefox, and Slack. Jira in the browser is terrible- even with insane resources and 1Gbps bandwidth I regularly have to wait 10-15 seconds to be able to enter text.

Why would you expect anything different when JS _does_ run in a single thread? We'll have to wait for WebAssembly to have anything like real multithreading, with good-enough performance, on the Web.

> Why would you expect anything different when JS _does_ run in a single thread?

No reason for site A rendering to block site B; no reason for either to block the main UI. No reason for an issue tracker to take 10 seconds to achieve interactivity on LAN (heck, I'd consider 0.5 seconds slow).

Not just in a single thread, but in the same thread as DOM rendering. It's conceptually impossible to use a website while its JavaScript executes (except for web workers, etc.).

Still, javascript issues tend to cause too many hangups of other parts (browser UI or other websites). We are in the awkward position where OS threads are considered too heavyweight to use an entire thread for each tab, but browsers haven't implemented a more lightweight alternative either. So things just share threads when they really shouldn't.

Browser green threads could be amazing!

Specifically an Erlang/Elixir-ish actor async implementation

Layout and rendering can happen asynchronously in background threads, but you have to carefully structure your JS code to not read back layout properties soon after modifying the DOM, otherwise it will block and turn everything back into sequential execution.

So an interesting thing happened to me last month. I had a Gigabyte ECC Pro 150 with a Xeon processor, and it died (hardware failure; it refused to POST after I'd had it for two years).

I run Debian Stable. When I swapped in the new CPU (Ryzen 7 2700X), motherboard, and RAM and powered on Debian, it booted up normally and automatically configured itself for the new hardware.

That's pretty normal for Linux. If the kernel supports the hardware, it'll load the drivers necessary for the hardware that it detects. Swapping CPU's and chipsets usually isn't a problem for Linux.

That makes sense, I have just never run into that particular edge case before.

Since at least Windows 7 it even works on Windows, although you need to do some chipset driver cleanup by hand after reboot before installing the new ones (if you need them). I wouldn't try that on earlier versions of Windows though.

That’s true to a certain extent. Once in a while a system won’t like a drastic change like going from an AMD chipset to Intel and you may get a bluescreen on reboot.

And you need to reactivate windows if you've swapped the motherboard / cpu.

That's kind of expected on Linux. As long as it's the same architecture you should be fine. I even assume you can get away with i386 or i686 on amd64 boxes (although you won't enjoy it).

Linux is much more flexible with booting than Windows typically is; generally, as long as the hardware is supported in the kernel (which is virtually always), there won't be issues.

Even Windows has gotten better about this; it's usually possible to image drives and boot them in a VM without everything exploding.

As of Windows 10 it is reasonable to expect to be able to rip a drive out of any given computer and put it in another one and have it work. Actually did this recently to jump several years in hardware on my workstation.

If you do that more than once in a 90 day window, you will likely have activation issues.

Not in a corporate environment with KMS though.

I migrated my desktop off of AMD to Intel to AMD over 6+ years, without reinstalling Windows. It just works.

I really wanted to go AMD for my current build, but for full-stack Javascript development, or most things involving a single thread, Intel is often significantly faster. I would have gone with AMD anyway to support competition, but the requirement to add a graphics card for all their performance CPUs, which I don't want until I actually need one, always tilted things toward Intel.

For a development box, is the single-thread performance on the AMD system really something you'd notice? For a production system you'd ideally pick the CPU architecture that's best suited for your workload, but most of us just go with whatever is currently under our hypervisor.

In my mind you're doing something incredibly specialized if you notice the difference between AMD and Intel, or between current generation and last generation CPUs. Video encoding is really the only "mainstream" application I can think of.

For Javascript development I doubt you would notice the difference, unless it's something highly specialised.

It’s not imo.

I mean, theoretically my 2700X has slightly worse performance per core (though not at the same price point; it's not fair to compare a $350 processor with a $600+ one), but it doesn't matter when I have webpack running with 4 threads, type checking on a separate thread, a DB server, and IntelliJ all running without remotely a stutter.

Yes, I would venture that most developers would benefit more from more cores than raw CPU performance.

Just a blanket reply to the replies: I wish I could find more developer-specific benchmarks; the best I can find are around JavaScript performance. There the GHz-comparable Intel chips are around 20% faster in common tasks than AMD. That is perceptibly faster, and significantly faster in activities like builds. I've read a lot of discussions showing that many multi-threaded tasks have a single-core element, so Intel is ultimately faster unless the work really scales across cores.

More cores are important, but the i7-8700 uses 65W, a single thread is faster than AMD's, and it has six cores / 12 threads. To get comparable performance but with 8 cores / 16 threads I'd have to get the 2700X, which is more expensive, plus a video card, which is much more wattage -- and to not get a throwaway video card, much more expensive. There are also benchmarks showing AM4 has storage performance issues of 10-30%, which could affect complex workflows involving builds and containers.

Still, I'm within the return period and trying to find a way to justify AMD, since my work is increasingly about containers. It's just such a huge timesink.

I think current AMD Zen offerings are faster single thread clock-for-clock for scalar code than anything Intel has to offer.

OTOH, AVX2 peak rate is lower for Zen than recent Intel offerings.

Javascript is very likely to be scalar bound, so I think you might currently find the best performance from AMD camp.

I'm looking for an upgrade for my PC at the moment and can't decide between the Ryzen 5 2600(X) and the 7 2700(X).

Now, if I knew that AMD was planning to release Zen 2 in May(?), I think I'd have to buy the 5 2600 because of the price drop at release.

I could also use any tips for a good motherboard :)

Gaming: 2600X

Workstation: 2700

Only a select few games are able to use more than 6 cores, and then only in some situations. For compilation and other workstation tasks the 8-cores (and more) are king, but they're expensive, so unless you have money to blow, go for the 2700.

> Only a select few games are able to use more than 6 cores

Poorly designed older games. Modern games should be able to use all cores, because Vulkan is available. dxvk, for example, uses as many cores as reasonable for compiling Vulkan pipelines.

See its config Wiki (dxvk.numCompilerThreads parameter) : https://github.com/doitsujin/dxvk/wiki/Configuration

So more cores can help for gaming, no doubt. Just depends on the use case.
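For concreteness, the knob from the linked wiki lives in a plain-text `dxvk.conf`. A minimal sketch (the parameter name is from the dxvk wiki above; the placement and value here are just illustrative defaults):

```ini
# dxvk.conf, placed next to the game executable (or pointed to via DXVK_CONFIG_FILE)
# 0 = autodetect from the CPU's core count; set a number to cap the pipeline-compile threads
dxvk.numCompilerThreads = 0
```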

> Only a select few games are able to use more than 6 cores...

Funny, it's not many years ago that people said games could use just 1 core, then 2 cores, then 4... and now 6. Isn't it likely this progression will continue?

Work-stealing queue design in games can use as many cores as you can throw at it (at least until memory bottleneck).
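A toy sketch of that work-stealing idea (all names and structure here are illustrative, not taken from any real engine): each worker pops jobs LIFO from its own deque and, when it runs dry, steals FIFO from another worker's deque, so work spreads across however many cores you have.

```python
import threading
from collections import deque

class WorkStealingPool:
    """Toy work-stealing scheduler: each worker owns a deque, pops work
    from its own tail, and steals from the head of other workers' deques
    when its own runs dry. CPython's deque pop/popleft are thread-safe."""

    def __init__(self, num_workers, tasks):
        self.queues = [deque() for _ in range(num_workers)]
        self.results = []
        self.lock = threading.Lock()
        # Seed every task onto worker 0 to force the others to steal.
        for t in tasks:
            self.queues[0].append(t)

    def _worker(self, i):
        while True:
            task = None
            try:
                task = self.queues[i].pop()          # LIFO from own queue
            except IndexError:
                for j in range(len(self.queues)):    # steal FIFO from a victim
                    if j == i:
                        continue
                    try:
                        task = self.queues[j].popleft()
                        break
                    except IndexError:
                        continue
            if task is None:
                return                               # nothing left anywhere
            with self.lock:
                self.results.append(task())

    def run(self):
        threads = [threading.Thread(target=self._worker, args=(i,))
                   for i in range(len(self.queues))]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return self.results

pool = WorkStealingPool(4, [lambda v=v: v * v for v in range(100)])
results = pool.run()  # all 100 squares; completion order is nondeterministic
```

Real game job systems do this with lock-free deques in C++, but the scaling argument is the same: as long as tasks outnumber cores, idle cores find work.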

Yeah it will be for gaming ;)

And what about the difference between the 2600X and the non-X? I think I'd need an aftermarket cooler for either variant... and the 2600 (non-X) is cheaper and lower in power consumption, and the clock losses are acceptable, or not?

And will a B450 motherboard be good enough? I'm using an Nvidia GTX 1060 6GB graphics card.

I don't think you need an aftermarket cooler unless you plan to overclock the CPU.

I have the 2700X and love it. Unless AMD releases a 16-core/32-thread Zen 2 at a decent price, I'll be keeping this until post-AM4, so at least 2021.

It’s an excellent machine as is the 1700X I have at work.

Honestly day to day for dev I can’t tell the difference between the 1700X and the 2700X.

Unfortunately, being on the Mac platform means I may never get a taste of AMD's Zen CPUs. I think Intel knew opening up Thunderbolt could spell the end of the Apple-Intel relationship.

Isn't that a big part of the value added by Apple? You don't have to care about CPUs, motherboards, etc...

If you want to buy a decent machine and not spend time finding out what is decent right now, get Apple. If you want control and perfect tuning for your particular situation, definitely do not get Apple.

> means

Sticking to a closed platform means many things, including, for instance, that you 'may never get a taste of' building your own box...

You can, just don't expect official hardware. https://amd-osx.com

I have a 6-year-old i5 and I'm about to upgrade. It's interesting that the main reasons are NVMe and graphics for a 4K monitor; the extra CPU speed is just a bonus.

You probably do not need to upgrade then. You can use NVMe over PCIe and just swap out the graphics card.

I am still on a 4.3GHz 3570K at home (holding up pretty well after some minor mechanical/percussive maintenance revived a dead memory channel caused by a flaky CPU socket). I'm eyeing 3rd Gen Ryzen later this year but for now upgrading from 16GB DDR3 to 16GB DDR4 doesn't seem cheap ;).

DDR4 RAM prices have fallen a lot lately. There are sales for 2x8GB sets for around $90 now, with 32 GB sets coming in under $200 as well.

> I am still on a 4.3GHz

I wonder what the benefit of an upgrade would be.

definitely going ryzen for my next desktop CPU.

Can recommend. I have a Ryzen 7 2700X and it mops the floor with all my other builds (which are admittedly all older builds). Runs Linux great and the IOMMU works very well - it seems they now officially support it, so GPU passthrough has worked super well and saves me from needing to dual boot.

Another bonus: the stock CPU fan, although flashy with its RGB LEDs, is quite capable and can probably even stand up to a bit of overclocking.

I am glad to see competition in the CPU space again. It's been too long.

What are you using for your virtualization?

I may have to try that when I get back. I have a Ryzen 7 2700X and an AMD R9 Fury.

I'm passing through a spare GTX 1070 using good ol' qemu kvm.

It's extremely easy to set up the PCI passthrough itself in virt-manager, but the system-level configuration is a bit more involved. You may also want a KVMFR like Looking Glass, since otherwise you'll need a physically separate keyboard/mouse/video setup.
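For reference, the host-side part usually boils down to something like the following sketch (the `10de:1b81` / `10de:10f0` IDs are example values for a GTX 1070 and its HDMI audio function; find your own with `lspci -nn`, and on NixOS you'd express all of this declaratively instead):

```shell
# 1. Enable the IOMMU via the kernel command line:
#      amd_iommu=on iommu=pt        (intel_iommu=on on Intel hosts)
# 2. Bind the guest GPU and its audio function to vfio-pci at boot:
echo "options vfio-pci ids=10de:1b81,10de:10f0" > /etc/modprobe.d/vfio.conf
# 3. Rebuild the initramfs, reboot, then add the PCI devices to the VM in virt-manager.
```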

I'm using Nix, so my system configuration is easy to summarize: https://gist.github.com/jchv/b0e4b39679e450536a17cc6a5d69169...

(On that note, I can definitely recommend NixOS, it's hard to even describe how helpful it's been in making my configuration understandable and reproducible.)

There are plenty of guides as well. Here's one for NixOS, but undoubtedly you can find more.


I don't think most commercial VM solutions support this kind of configuration. I'd guess VirtualBox might, but I know for a fact VMware Workstation doesn't (and there's no VMware Workstation package for NixOS yet, so my license is collecting dust at the moment :()

It's worth noting you need a separate GPU for this right now. Intel just recently started supporting something called GVT-g that lets you share an Intel iGPU among multiple VMs; that's not as useful for me since I want a better GPU, but maybe useful to others. I have yet to try it.

Thanks for the GVT-G reference. For running a Windows VM in a laptop that seems like the only missing piece. Graphical performance is clearly lacking and if that works as described it seems like it would fix it.

Can you specify which motherboard please? And I guess it's a pair of Nvidia cards fitted?

I'd like a setup like this in my future!

Sure. I believe the motherboard is an ASUS X470 Prime Pro. I picked it up at Fry's and I'm not home to look at what it is so I could be a little off.

It is indeed a pair of NVidia cards, but that part only matters a little bit. I don't particularly recommend NVidia for the host, and as far as I know you can run whatever card you want on the Linux host. Looking Glass may care about the guest GPU simply because it's still a bit experimental, but there's no reason I'm aware of that it can't work with AMD or Intel graphics processors.

Do the GPUs have to be the same architecture/model?

No, there's no reason I'm aware of that they would have to be similar in any way.

Last time I checked, Proxmox supports passthrough pretty well. It's just using qemu/kvm under the hood anyway.

+1 for the NixOS configuration tip!!! Glad to learn it's working well on Ryzen!

Thank you so much, that is extremely useful!

I went 2700X as well (paired with 32GB DDR4-3200 and an RTX 2080). Solid machine, but the RTX 2080 was not cheap (went EVGA, since once you're into silly money an extra $100 for the warranty/customer service and build quality hardly matters).

Did a 2600 build late last year... actually a little faster than my 4790K from 5 years ago, and a fraction of the cost. Waiting on Zen 2 to upgrade my own desktop later this year... hoping to see a 16-core mainline, or might wait for threadripper.
