The AMD 3rd Gen Ryzen Deep Dive Review: 3700X and 3900X Raising the Bar (anandtech.com)
383 points by neogodless 11 days ago | 190 comments

Aww yisss, finally, tangible performance gains and huge value increases for consumers in the CPU space. Intel getting curb stomped is just a bonus.

I'm worried about AMD since, AFAIK, the enthusiast PC builder/gamer for whom these chips are intended is a niche market. Most volume is in OEM laptops (Dell, HP, Lenovo) that companies buy in bulk for their workforce, and that's where AMD has zero presence while Intel has a death grip on those OEMs.

Can AMD still recover in this market?

> Can AMD still recover in this market?

Everyone I know building a new PC is using an AMD chip. It's cheaper for the performance. In addition, if the gains (both energy and performance) can be moved to the mobile CPUs, then most people would switch there as well.

In terms of OEMs, they have supercomputers being built with their CPUs, Google is using their GPUs for its game-streaming platform, and I suspect that with that confidence placed in AMD we'll see even more. Just recently, Lenovo also released the T495, which has an AMD chip.

I suspect the move to AMD will be slow but steady until Intel has an answer. And even if Intel does answer (10nm), it has had several large security bugs. I suspect many companies and cloud providers will move to, or at least explore, a more diverse set of chips for that reason alone. That means more people trying AMD chips, at the very least.

I've been thinking about getting a T495, so I'm watching for reviews and whatnot (esp since the A495 had throttling issues), as well as watching for news on Zen 2 mobile processors.

It would've been nice to get the product launched in time for the fall, but hopefully we'll see it launch soon.

The T495 may beat the T490 in most things (if other laptops are any indication), but now is not the time to buy it in the U.S. For us, there are only three prebuilt models to choose from, as customizing isn't an option yet.

I wanted to get one as well; however, I might hold off until the next gen, as there's a chance of a Ryzen 2 chip and USB 4. And maybe a bigger battery ;)

I’d be curious of how the builds using the 3700 are doing, though. They pack a punch.

>I’d be curious of how the builds using the 3700 are doing, though. They pack a punch.

If I'm looking at user-submitted benchmarks properly, there's only a very slight difference compared to the 3500U. Seemingly not worth the upgrade, or am I mistaken about this?

The Australian store did not have the customisation options I wanted, so I purchased mine via the Hong Kong store and got it delivered to a family member there. Be careful though - the only way I got my E585 stable was to add idle=poll to my kernel parameters (which means I get pretty poor battery performance).

I asked Lenovo support about this, and the rep said a customizable version would be available within two months. I'm in Canada, but I imagine the US won't be later than here.

I looked at the T495(s) and all the best AMD-based laptops recently. They aren't that good, certainly not better than Intel-based offerings, except for the improved integrated graphics. notebookcheck.net has some decent comparisons.

But the laptop parts (3700U) are still old-gen CPUs, not the ones everybody is happy about this year. Hopefully in 2020 we get some decent 7nm options with Thunderbolt, HiDPI screens and LPDDR4.

While they may be cheaper per unit of performance if you compare just the CPU, it isn't so clear once you include a GPU. The fastest Ryzen with integrated graphics is the 2400G, and it's only 4 cores. That's an extra $75-100 for the GPU.

I had a 2400G and it isn't cutting it anymore. And I don't need a gaming video card to work with Visual Studio at 4K@60.

The 3rd gen Ryzens with an iGPU aren't out yet.

They announced the 3000-series APUs, but it looks like they're basically the same generation CPU-wise as the 2000-series CPUs, so I guess 7nm APUs with more cores will be sometime next year and be called 4000-series.

The new ones are zen+, so about 5% faster per clock due to cache improvements. They are also 12nm instead of 14nm. The other zen+ chips shrunk the transistors, but left gaps between them so the physical size was the same, but with a GPU shrink, a few more months, and all the 3xxx mobile chips also being zen+, they may have gone ahead with a physical shrink too.

If you're spending $300+ on a CPU, you're probably buying a GPU as well. And if you're not, and you just need something that's as good as onboard graphics, you can probably get an old card for $20-30 on CL.

My last two builds were an i7-7700k desktop and an i3-8100 FreeNAS (FreeBSD based) build. The i3 will become a Linux box for other purposes once cheap Xeon Es become available on eBay.

The onboard graphics support is great. Now I can use my PCI slots for a 10gb SFP+ backbone and some other stuff.

There is no $30 card that can do DP 4K@60, and that's my dev machine. I ordered an Nvidia 1030; that's $80.

I doubt that any gamer needs more than 4-6 cores, but single-thread performance is very important.

I am getting a 3600 to replace the 1700X in my gaming build, and replacing the 2400G with the 1700X and a 1030 in my workstation build.

> I doubt that any gamer needs more than 4-6 cores, but single-thread performance is very important.

This is no longer true, thankfully due to the influence of current gen consoles.

Games like Assassin's Creed: Origins tend to be heavily CPU-limited with fewer than 8 fast cores. Core count is really starting to matter in gaming.

Watch out for the 3600G then. I don't know if there's a concrete plan to launch it immediately, but it looks like the additional $20 may give you what you need. Navi should do 4K@60.

I realize this precludes getting a PCIe 4.0 GPU of your choice much later, when you have the budget for it.

Of course, don't expect it to give you ultra level graphics for that small die size and heat dissipation.

I'm somewhat skeptical about the 3600G. The 2400G is already a bit bandwidth starved. Dual 64-bit DDR4 channels simply can't keep up with the CPU plus a large GPU. Adding channels isn't possible without changing sockets. There doesn't appear to be enough room on package for an HBM stack like Intel did with Hades Canyon. Running your RAM at 4400MHz with that 0.5x Infinity Fabric multiplier could probably allow a bump from 11CU to 16CU (or 20CU at lower clockspeed). A decent bump and enough to eat the entire low-end GPU market, but not incredible.
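The bandwidth ceiling behind that argument can be put in rough numbers. A back-of-envelope sketch; the per-CU bandwidth figure is an illustrative assumption, not a vendor spec:

```python
# Rough theoretical memory bandwidth for a dual-channel DDR4 system.
# All figures are estimates for illustration only.

def ddr4_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    """Peak bandwidth in GB/s: transfers/s * bytes per transfer * channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

stock = ddr4_bandwidth_gbs(3200)   # a typical 2400G-era configuration
fast  = ddr4_bandwidth_gbs(4400)   # the overclocked scenario above

print(f"DDR4-3200 dual channel: {stock:.1f} GB/s")  # 51.2 GB/s
print(f"DDR4-4400 dual channel: {fast:.1f} GB/s")   # 70.4 GB/s

# If the 11CU of a 2400G roughly saturate DDR4-3200 (assumption),
# then scaling bandwidth proportionally supports about:
per_cu = stock / 11
print(f"CUs sustainable at DDR4-4400: {fast / per_cu:.1f}")  # roughly 15
```

Which is in the same ballpark as the 11CU-to-16CU bump described above: more CUs than that and the GPU is starved again.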

I'm left wanting a heterogeneous next-gen system using shared HBM for everything.

Intel currently sells the i7-8809G. It contains a substrate, 4GB HBM2, 24CU Vega M (14nm), and quad-core (3.7-4.7GHz, 14nm) with a TDP of 95w for an MSRP $546. If the substrate and HBM cost $100, then that's MSRP $446 for quad core plus 24CU GPU.

AMD currently sells the 2400G for $130 (launch MSRP for 3400G is $150). The 1300X (100MHz slower, but double the cache) costs $110 giving us around $20 for the 11CU GPU (the 2200G can be had for about $90 indicating the GPU price may actually be lower than that).

Let's build a next-gen APU for both desktop and gaming laptops.

HBM3 spec was finalized almost 3 years ago. Estimates for Vega put 2 HBM2 chips (8GB) at around $150 with a substrate adding another $25. We'll actually work within the constraints of HBM 2 at the moment. Just realize you should get 2x the RAM for around the same price (die shrinks and advances in 2.5D chip construction) along with lower voltages, higher speeds per pin, and potentially fewer stacks needed. We'll use HBM because it has enough bandwidth and uses simpler memory controllers while also cutting latency in HALF vs DDR [latency source](http://www.pdsw.org/pdsw-discs17/slides/stocksdale-pdsw-disc...).

We'll need 16GB of HBM2 at a cost of $300 (remember, that should become 32GB of HBM3, or lower costs closer to $150 for 16GB). If 11CU costs $20, we'll assume $100 for 40CU (adding around a 25% markup for the larger die, though 36CU and 24CU lasered versions could greatly reduce that). That puts us at $400. A 3600 6-core CPU sells for $200. We'll add another $50 to the MSRP just because, for a final price of $650.

That is $100 more than the MSRP for the i7-8809G, but has a TON of advantages. It has 2 more CPU cores and almost 2x the GPU power (with access to more RAM if it needs). There's no need to spend another $100-200 for additional DDR4 plus the CPU performance will be better due to reduced latency. The lack of RAM on the motherboard and getting rid of all those traces should reduce motherboard cost and size. Finally, at 7nm, this should all fit inside a 95w TDP suitable for both desktops and gaming laptops (which usually have 35-45w CPUs plus 50w+ GPUs).
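Laying out those estimates as a simple bill of materials makes the totals easy to check (all numbers are the hypothetical figures from the comment above, not real part prices):

```python
# Hypothetical next-gen APU bill of materials, using the estimates above.
parts = {
    "16GB HBM2 + substrate": 300,   # should fall toward ~$150 with HBM3
    "40CU GPU die":          100,   # ~$20 per 11CU, plus markup for die size
    "6-core Zen 2 CPU":      200,   # 3600-class pricing
    "packaging / margin":     50,
}
total = sum(parts.values())
print(f"Estimated APU MSRP: ${total}")  # $650, vs $546 MSRP for the i7-8809G
```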

The GT 710 does 4K@60 for $43 new from Newegg.

I doubt that’s the cheapest option though.

I don't believe a 710 can do 4K@60 via DP. A 730 might, but finding a card with DP under $50 is hard.

Nvidia says the 710 can do both: 3840x2160 at 60Hz over HDMI, and 3840x2160 at 60Hz over DisplayPort. https://www.geforce.com/hardware/desktop-gpus/geforce-gt-710...

But it’s up to the manufacturers to decide what ports to include.

PS: That said, it does seem like there's a premium associated with DisplayPort. Are you sure you're not also paying that premium when looking at Intel motherboards?

> I doubt that any gamer needs more than 4-6 cores, but single-thread performance is very important.

That's largely false for modern engines.

> If you're spending $300+ on a CPU, you're probably buying a GPU as well.

I'm not a gamer. I don't give a shit about any GPU performance I couldn't have had on Intel integrated graphics 10 years ago.

I just need compute, and I hate having to pay $50-100 extra for a completely overspecced GPU that will idle at 0.0000000001%.

GPU is compute. The list of use cases that require lots of CPU and negligible GPU performance is shrinking by the day. Video and photo editing are heavily GPU-accelerated, as is 2D and 3D CAD/CAM. Scientific computing is increasingly reliant on GPU performance, and even your browser is GPU-accelerated. I suspect that an increasingly large proportion of developers will start demanding fast GPUs as the usefulness of low-precision computing expands.

It just doesn't make sense for AMD to waste a lot of very expensive die area on a crappy GPU that most of their customers will never use, for the benefit of a very small minority of customers who resent paying extra for a cheap dGPU. The Zen 3 APUs should offer up to 8c/8t or 8c/16t with a fairly capable Vega GPU, which is a far more compelling value proposition for the vast majority of consumers.

I used to think that way until I tried to run 4K@60hz on my previously owned 2015 13" MBP.

Stuttering sucks. I went with the 15" with dGPU on my new MBP and the problem went away.

Gamers don't care for an iGPU, honestly.

I think this is a good move by AMD: have 1-5 CPU models with IGPs and keep the lineup clean and compact. Not like Intel, with their, what, 50 iterations of different types per generation? It could even be hundreds.

I'm not sure if overwhelming buyers was part of their business strategy, but they sure did, and it didn't win them any sympathy, on the contrary...

Realistically developers are the only niche that needs as much CPU as they can get with virtually no GPU requirements. All other CPU-intensive applications tend to heavily profit from powerful GPUs.

If it worked well enough, I'd take it.

These are not CPUs you would ever really use with an IGP.

> I'm worried about AMD since, AFAIK, the enthusiast PC builder/gamer for whom these chips are intended is a niche market. Most volume is in OEM laptops (Dell, HP, Lenovo) that companies buy in bulk for their workforce, and that's where AMD has zero presence while Intel has a death grip on those OEMs.

Intel's death grip rests on four factors:

- Intel's "deal sweeteners" (preferential pricing et al.), which they can't afford to keep up forever.
- Intel's ability to provide the whole stack (chipsets -> boards).
- Corporate IT's sclerotic processes (not always a bad thing, BTW, though personally I find it stifling).
- Server chip architecture.

It will all come down to money. If AMD parts have good performance/cost/power consumption points, you'll get AMD laptops (is thunderbolt 3 silicon available for AMD? I don't know).

If they beat Intel on server performance why wouldn't everybody switch? AMD doesn't appear to have the full AVX instruction set but I'm not sure how many people really use that.

Corporate IT's process is mostly about money, holistically (does it give us volume pricing? Does it reduce support cost?) as corporate IT is always a cost center. So they'll switch if it's worth it.

So you're right that Intel will continue to be dominant for a (relatively) long time, but for the reasons above that's more about market hysteresis than fundamental superiority, which Intel still has in certain places but which has eroded mostly across the board for a long time. And the kicker of hysteretic phenomena is when you reach the critical point you can't really get back.

>Corporate IT's process is mostly about money

That is so far from the truth. Corporate IT is all about not getting fired and doing no wrong. It has very little to do with performance, power, or cost. Your CTO could make an excuse not to buy AMD because it's "not stable." That was the reason AMD got no traction in the old days.

There is huge brand stickiness in corporate purchasing, and people are willing to pay a premium for it. That is why everyone wants these clients.

> Corporate IT is all about not getting fired and doing no wrong. It has very little to do with performance, power, or cost.

From what I have seen, this rule is mostly about software, which is vastly more complex than hardware for typical IT departments. So "not getting fired" is mostly about software. E.g., our company still buys very expensive licenses from IBM, Oracle, SAP, and so on. As far as hardware goes, though, it's super crappy HP laptops: despite devs complaining about disks crashing, freezing, and a myriad of other problems, everyone still keeps getting those.

That's how the old saying goes: "nobody ever got fired for buying IBM."

> is thunderbolt 3 silicon available for AMD?

This is the only AM4 motherboard that I’ve seen so far with Thunderbolt 3:


It’s “available” but hasn’t been adopted yet by most motherboard manufacturers. Maybe we’ll see laptops with TB3 when the 4000-series mobile chips come out.

There's no need to worry about laptop & mobile CPU market share. It's a very low-margin market, and Intel has such deep pockets that they can bleed competitors out if they need to. What AMD needs is server CPU market share. As more computation moves to the cloud, that's where all the money is. It's also in the datacenter where AMD's price/performance and performance/watt advantage makes the most difference for potential customers.

Now that AMD has a solid foothold in enthusiast mindshare with Ryzen and Threadripper, it's time for them to strengthen the EPYC lineup and seriously compete with Xeon.

This. I recently made the leap (financed by work, of course) to "server grade" hardware after years of cobbling together my own boxes from consumer parts. The price difference is phenomenal. Like eating at McDonald's vs. catering a wedding. The idea that essentially one company had this market locked up for all those years boggles the mind.

>Can AMD still recover?

I’ll be honest, I was worried for them a few years ago, but they’re still in business and have a killer new chip so of course they can recover.

If Zen 2 is as good as it seems, it might make some inroads with volume OEMs.

Also, Oak Ridge’s new exascale supercomputer will use AMD EPYC:


It's good timing for the consumer market too, with Win7 about to go unsupported.

Well, AMD has a "death grip" on the console market. That's kind of their equivalent. If they can also break Intel's monopoly in the laptop market, AMD could become the next Intel faster than anyone expects.

Nobody wants the console market in the first place, it's low-margin high-touch garbage. It's a good base-load, but if you could, you would rather sell that wafer worth of chips to datacenter applications or in a glossy box to gamers.

I wonder if the wafer angle is less the case because AMD is fabless. Intel, indeed, would lose the capacity, but AMD can farm out the Xbox One to a second-tier manufacturer or process while still cashing the cheque.

I also suspect there's a market positioning long game in controlling console chips. Every AAA title and major engine for the next 5-10 years is going to target 8-thread AMD CPUs, so you'd expect lazy ports to perform well on Ryzen for free.

It's not so much targeting 8-thread AMD CPUs but targeting multicore x64 CPUs in general. The differences between operating systems account for way more than the differences between Intel and AMD CPUs.

I'm helping out with corporate IT procurement, and the first thing we are asking OEMs is how we can get these chips in new models.

Buying 600 a year minimum gives you some pull. Hopefully more companies are asking these questions.

Hmm, 600/y really isn't that much - curious to know what kind of discount you've been able to negotiate for that volume?

>Can AMD still recover in this market?

There are a few things to keep in mind. The absolute vast majority of the $1000+ laptop market belongs to Apple, and most other laptop CPUs are low-price and comparatively low-margin. AMD is already competing in that area with their APUs on GF 12nm, but that 12nm wafer capacity goes first and foremost to the I/O chip for Zen 2, especially EPYC, which is where all the money is made.

Their next-gen 7nm APU will be going head to head against Intel's 10nm Ice Lake. So while Zen 2 can enjoy its performance lead on the desktop, the APU is highly likely to lose against Ice Lake. Ice Lake is supposed to have laptops on shelves by this Christmas, and I don't expect the 7nm APU to be announced until next CES, with product shipping in Q2 2020.

Sometimes I wish AMD could work out a deal with Apple to use their CPUs/APUs in its product line. It would have a knock-on effect across the industry: if it's good enough for Apple, it's good enough for everyone else.

Uh, Apple has less than 8% of the market [1]. Why do you think it has the "vast majority" of sales? I'm assuming there's confirmation bias here, but the vast majority of laptop sales are definitely not Apple, and not even at the high end, even if we exclude corporate sales.

[1] https://9to5mac.com/2018/11/21/mac-market-share/

"The absolute vast majority of the $1000+ laptop market belongs to Apple"


This seems likely to be extremely overstated or false just looking at overall market share.

> The absolute vast majority of the $1000+ laptop market belongs to Apple, and most other laptop CPUs are low-price and comparatively low-margin

Eh? You make this a statement, but without a source I really fail to believe it.

I would guess that most laptop purchases are by business, and I would guess that most of those are over $1000ea, and I would further guess the overwhelming majority of those are not Macbooks.

> while Intel has a death grip on those OEMs

New Ryzen Notebooks are appearing. A friend of mine has one and is pretty happy with it (apart from some touch screen issues on Linux, which have been fixed in the latest kernel).

I don't know much about AMD's OEM strategy, but it seems like they're making some progress. What incentives can Intel still offer OEMs to go with them?

I was amazed to see some Lenovo laptops with AMD processors. Of course, not the flagship models, but still...

Maybe not flagship, but still under the highly reputable Thinkpad brand. I would love to see some more Ryzen Thinkpads.

Not just Thinkpad either, but the T series. T is the mainstay for businesses (certainly by volume), so that shows a lot of trust. The E line is entry-level (and largely to be avoided if you can), X is thin/light, and P is workstation class (Xeons and Quadro-type stuff). Though the newer T's aren't much thicker or heavier than the earlier X's, so the lines are blurry.

The T495 has an AMD processor, not exactly flagship, but very close (T490/T490s is their flagship).

The flagship is the X1. The T4x0s is the budget version of the same form factor.

They’re the same price, just different specs. The T-series is more for enterprises, I don’t know, but I bet both lines have roughly equal sales.

All I've seen says otherwise. The X1 is more expensive and has a more premium construction. The T4x0s approximates it in some specs and form-factor but is still behind on both. The T4x0s seems positioned so that enterprises can distribute it broadly and get the size/weight advantage without the cost of the X1. At least the enterprise I work for does that, using the X1 only for the executives.

No, the X1 series is Lenovo's answer to the MacBook Air and other ultraportables, whereas the T4XX series is their answer to the MacBook Pro, with the T4XXs series being the more premium variants.

Also (though they don't make it now) the T4xxP; I think I bought the last model they made (i7-7700HQ with a 14" 2560x1440 screen and support for up to 32GB of DDR4). It was a little speed demon two years ago, and still is, really.

I had a low end one like 7 years ago and I was pretty happy with it. The integrated graphics were way better than Intel had back then and it worked pretty well with that GPU accelerated UI Windows got starting with Vista.

Lenovo has at least introduced a bunch of AMD laptops in the Thinkpad lineup this year. T495, X395, E595, E495 at least. These correspond to T490, X390, E590 and E490 which come with Intel CPU.

I desperately want an AMD laptop with the latest architecture; the battery gains are amazing from what I saw in the conference.

I've held back buying an Acer swift for this reason alone.

You'll have a while to wait. Ryzen 3xxx APU and mobile chips use zen+ on 12nm. Zen 3 launches early next year, so you'll probably have to wait until then for zen 2 mobile chips.

Enterprises tend to move slowly, but leaving performance and money on the table is stupid. Small companies will adopt first, and then it will percolate slowly up to larger companies.

As all technology does.

Where are you getting "curb stomped" from?

From the conclusion:

"When it comes to gaming performance, the 9700K and 9900K remain the best performing CPUs on the market"

"Ultimately, while AMD still lags behind Intel in gaming performance, the gap has narrowed immensely, to the point that Ryzen CPUs are no longer something to be dismissed if you want to have a high-end gaming machine. Intel's performance advantage is rather limited here – and for the power-conscientious, AMD is delivering better efficiency at this point – so while they may not always win out as the very best choice for absolute peak gaming performance, the 3rd gen Ryzens are still very much a very viable option worth considering."

This isn't to take anything away from AMD, and people will finally consider them a serious alternative, but I would not call this a "curb stomping."

A curb stomp would be Intel vs pre-Ryzen AMD

AMD processors are considerably cheaper, draw less power, and run cooler for extremely competitive performance (especially in multi-core, where they excel).

I guess it depends on where you're defining 'curb stomped'. As a whole, I think AMD is besting Intel here.

The "curb stomp" is how few people will actually make the choice to go with Intel at this point if they're making a fully informed purchase decision.

The Ryzen 7 3700x actually beat the 9900K in one game benchmark, and it's so close in performance that its much lower price is going to be attractive.

The Ryzen 9 will also outperform the 9900K in basically everything as soon as you start Twitch streaming.

And then for every other task not related to gaming you're completely destroying the 9900k.

All with better thermals, a usable stock cooler, and potentially cheaper motherboards saving you money over Intel.

And this is with first week BIOS and drivers.

In everything apart from gaming FPS, the Ryzen 3900X is much faster than the 9900K.

Does anyone have any insight on whether AMD being included in the 2-3 major gaming consoles being produced in 2019 will help sustain market share? I believe Microsoft, Sony, and Google have all announced that their next-gen consoles / cloud platforms will use AMD.

AMD has been in consoles for years and it didn't help their PC market share.

It did, however, keep them alive.

Dell ships at least 2 systems with Ryzen 'pro' CPUs, and servers with Epyc CPUs. I know because I've ordered the desktop system for our lab and they work fine, and they were a good value.

Buyers always like to encourage competition.

The new Ryzens perform very well. It's even more impressive considering that the Intel chips did not have the Fallout/ZombieLoad mitigations applied [1], which come with significant performance degradation.

LinusTechTips, which used to somewhat favor Intel, did their benchmarks with current patches applied to everything: https://www.youtube.com/watch?v=z3aEv3EzMyQ

[1] https://twitter.com/RyanSmithAT/status/1147867180282699782

Hah, LTT is really milking the hype in that one. Good show.

I can't say they're wrong to be excited, though. When's the last time a processor launch really mattered to anyone? The improvements to compile time alone have me sold, though it will be tough to pick between the 12-core and 16-core.

The biggest component missing in these reviews is where and when I can find these things on shelves. Especially as someone not terribly concerned with gaming performance.

You can buy them in most MicroCenters today, if they did not all sell out. Amazon has a 3600 and Newegg has 3600 and 3700 but not 3800 and 3900 (don't know if they sold out or if they just don't have them yet).

The short answer is they are available now, if they don't get sold out from your preferred retailer.

I believe the 3800s are not yet available anywhere, but 3900 might have sold out.

No MicroCenter here in south bay, sadly. That’s alright, though, I am mostly curious when it’ll start showing up.

I didn’t really plan to fire off a purchase today, but I did stop into my local computer store and they did not have any yet. So it seems supply may be at least a bit limited for now.

>You can buy them in most MicroCenters today

Off topic: do people still buy components in retail stores and not online? I can understand wanting to see clothes or food in person first, but for electronics, especially components, it doesn't matter.

You can get things the same day, returns/replacements are much easier, and price matching is often an option. You don't have to worry about trickery with Amazon sellers or ambiguous listings (like laptops with similar yearly models), sometimes outright lying listings, mix-ups, shipping problems (I had an entire computer go to the wrong apartment), package theft (I had a UniFi USG stolen even though it had been sitting out for under a couple of hours!), and honestly more.

Heavy stuff can be expensive to ship if you can't get it through Amazon Prime, and shipping Li-ion batteries is restricted to ground transport.

I still have many of my electronics shipped, but yeah, it's actually not a wash. Especially if you are buying used stuff; I find eBay is often a ripoff.

Sometimes, you want it now, not in whatever the best turnaround time shipping will give you is.

The last two times I went into a retail store for technology were when I wanted flash drives sooner than Amazon would turn them around for me, and when my laptop catastrophically failed and I discovered the old one I use as a fallback was also broken.

Lately I buy all of my components at retail stores. Something about going and holding two different boxes and comparing the specs without any ratings or stars or "people also bought/looked at this" is more satisfying to me.

I have lived in places where there really were no computer parts stores anymore. I would be fine buying components online -- I'm not opposed to it -- it's just that I like to go shop in physical stores while there are still physical stores left where I can do that. :)

I'm very impressed by the performance of the 3700X given its extremely low power consumption. AMD has historically delivered subpar mobile processors with higher power consumption than their Intel counterparts. A processor with half the 3700X's TDP would be very competitive if the graphics module's consumption can be contained. I'm looking forward to Zen 2 mobile CPUs, although I do not know when they will be announced.

Yes, the performance per watt is a huge jump: https://hexus.net/tech/reviews/cpu/132374-amd-ryzen-9-3900x-...

Agreed. And Bulldozer was an ultimately underwhelming run, but AMD still set the stage during that time for many technologies and practices that continue to raise the tide for consumers. Mantle API was opened to Khronos and became Vulkan, and opening their hardware has allowed open driver development to keep parity with the proprietary, negating the GPU vendor as middleman problem.

And not all of the A-series were awful, but it seems those were the favored ones for OEMs to put in the very popular $300 Ultrabook competitors. I'd also hazard to say that if we went back and benchmarked the Intel CPUs of that time with the mitigations of today, they wouldn't have fared better than AMD's.

We've come a long way from the world before smartphones, parallelism, SoC, and affordable flash, and I'm just glad we're finally here.

>opening their hardware has allowed open driver development to keep parity with the proprietary, negating the GPU vendor as middleman problem

Except until recently, AMD GPU performance was a bag of wet farts for both industry and gaming.

That's not accurate. AMD only recently lost the ability to compete with Nvidia at the high end, and only for a while. They still had cards like the RX 580 that were the best option for their budget, and the Fury, Vega, and Radeon VII were not slow; they just were not better than Nvidia's cards. See https://www.pc-kombo.com/benchmark/games/gpu (I run a meta-benchmark that includes those cards).

Also keep in mind that Intel and AMD have different ways of measuring TDP; the difference would be greater with a standard measure.

One advantage of AMD laptop processors is their superior integrated graphics, which, while not competing with a dedicated chip, outperform anything integrated that Intel has to offer, last I checked.

Shame about the idle power consumption, though, which seems to have rather suffered this generation.

I've only seen that with total system power draw. The X570 chipset is power-hungry; check again in a few months for a test with a B450/X470 chipset.

As I understand it, the I/O die on the actual processors is basically the same die as the X570 chipset, and there's no way of avoiding the power draw from that. Plus there's the question of what kind of performance you can expect with lesser chipsets and motherboards.

There are benchmarks. There is no performance difference between a B450 chipset and X570.


The improvement in the Dolphin benchmark (which probably indicates improvements in other JIT-based emulators/runtimes) is pretty impressive, especially compared to first-generation Ryzen. For a while that benchmark had every AMD processor performing worse than every Intel processor in a given review.

If the Chromium compilation benchmark (which reportedly was screwed up and will be added later) shows similar improvements, I think this generation will finally convince me to replace my Ivy Bridge box.

edit: I just saw the Phoronix benchmarks, which show LLVM compilation going from 547 seconds on 2700X to 406 seconds on 3700X. I'm not sure whether those were run with the same chipset/RAM/storage versus the 2700X result being an older one pulled out of the database, though.

Dolphin is probably very happy with the improvements to branch prediction.

As for compiling, Gamers Nexus' review of the six core 3600 had it absolutely trashing other CPUs (33% faster than the 9900k, 11% faster than the 2700x) in their GCC self compilation test.

The AMD subreddit has a megathread with all the comparisons on it:


It doesn't appear to work well with Linux, aka won't boot. At least current / new distributions of Linux... (at least ones utilizing systemd)


I suspect this will be fixed soon, since chips just came out, and a lot more people will look at it, besides AMD themselves.

Terribly tangential, but is there a possibility at all that the higher core counts will converge with GPU core counts in the future — a few large CPU cores, many medium cores, and hundreds/thousands of micro cores?

Programming those would require a new paradigm possibly.

I'd say no (but you might need to clarify what you consider a core to be and whether you're confusing it with a thread). The way that SIMD processors work is fundamentally different from the way that a CPU works.

GPU cores run the same operations across a large number of "threads" (32, 64, etc) - they're not really threads.

When you have branching code running on a GPU, the threads are split into the branches, and each branch gets executed separately.

CPU threads run independently of each other (there's often locking for access of shared data, but that's not what I'm talking about).

Even when multiple threads on a single CPU core are running (SMT) they're still not performing the same operation.

Per thread (it's not quite what they are, but a more appropriate terminology escapes me) there is, and (unless it's a very stripped-down CPU) always will be, a far greater silicon overhead for CPUs compared with GPUs.

That wasn't the best description of the way things work, but I'm rather tired and not a hardware guy.

> When you have branching code running on a GPU, the threads are split into the branches, and each branch gets executed separately.

I'm not an expert on GPU hardware but in my readings on this, I've seen it said that if there is a branch in your code, all cores take that branch even if it is a no-op for many of them, thus all branches are taken by all cores when the code is not coherent. Hence why making GPU parallelization can be quite a different programming paradigm to really take advantage of how it works, and it is more challenging to do properly than truly independent CPU threads that do not affect each other when they branch.

That's right! It's called "wavefront divergence".

Ideally all "threads" of the wavefront take the same side of a branch, so it can skip the not-taken side of the branch. Wavefront divergence is when a single wavefront has threads that take different sides of a branch, so the whole wavefront runs both sides and masks out the results based on branch direction per thread.
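A toy way to picture the masking (a Python sketch of the idea, not how real hardware works):

```python
# Toy SIMT model: one "wavefront" of 8 lanes executes BOTH sides of a
# branch, and a per-lane mask decides which result each lane keeps.
def simt_branch(data, cond, then_fn, else_fn):
    mask = [cond(x) for x in data]           # per-lane branch direction
    then_vals = [then_fn(x) for x in data]   # whole wavefront runs "then"
    else_vals = [else_fn(x) for x in data]   # ...and also runs "else"
    # mask out the side each lane didn't take
    return [t if m else e for m, t, e in zip(mask, then_vals, else_vals)]

lanes = [0, 1, 2, 3, 4, 5, 6, 7]
out = simt_branch(lanes,
                  lambda x: x % 2 == 0,
                  lambda x: x * 10,   # taken by even lanes
                  lambda x: -x)       # taken by odd lanes
print(out)  # [0, -1, 20, -3, 40, -5, 60, -7]
```

If every lane agrees on the branch direction, real hardware can skip the not-taken side entirely; the cost only shows up when the wavefront diverges.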

Thanks for the reply - I hadn't actually thought in any depth about what's going on, and I'd like to get in GPU programming at some stage.

Also, sorry about the barely prompted and completely unjustified wall of rambling text - this is something that I really should've given more consideration to already.

That's a programming model that has the potential to go very wrong, very fast if you don't think in depth about what you're doing. Branches within branches are going to be very problematic (with exponentially decaying throughput), although multiple branches at the same level are handled very cleanly.

To be honest, I don't know how I thought that it might be rescheduling things instead.

The sort of rescheduling that I seem to have been thinking of, could only make a difference in cases where the batch size exceeds the wave size, and my guess is that the factor of difference would need to be large to make an impact.

At the base case, NOPs and rescheduling would perform identical operations - so all the scheduler would get you is a hardware overhead (the time overhead could be mitigated when running on a single wave).

The scheduler would also introduce latency since different waves in the same batch would need to wait for each other before rescheduling could proceed.

You'd cause problems for your memory layout - programmers referencing values from threads that have branched would probably need to be treated as undefined behaviour.

You'd also need to introduce a stack, per thread, to keep track of the wave histories - allowing you to unsort and re-reschedule at the ends of branches.

All this to run parallel execution on what seems to be uncommonly large batch sizes (I believe AMD just dropped their wave size down from 64 to 32 - I don't know if this is because 32 is a Goldilocks batch size, or if it's simply to achieve better performance on Nvidia optimised applications).

> all branches are taken by all cores

Perhaps this should be threads, but certainly not cores - it's a single core running the same operations on different data across multiple threadish things.

I'm not sure if there's a more technically accurate terminology for quite what they are.

That is basically what a GPU is. However NUMA architectures are often really "NU", so the GPU doesn't have all the cache infrastructure that a CPU does, because of the CPU's typically MIMD workload.

If you're interested in an earlier mass-market phase of moving general-purpose computation away from the von Neumann CPU approach, check out the PlayStation 3's "Cell" architecture. Devs really struggled with it and Sony went back to von Neumann for the PS4.

I think that’s unlikely to happen. See what happened to Intel’s Larrabee / Knights Landing / Xeon Phi. They have 50-70 x86 cores (initially Pentium, later Atom) on the chip, with extra AVX512 vector units.

CPU cores are spending a lot of transistors minimizing latency: branch prediction, microops fusion, sophisticated multi-level caches. GPUs are fine without most of that (they do have caches but much simpler ones), they are spending majority of their transistors and electricity on ALUs. Instead of fighting latency, GPU cores embrace it and optimize for bandwidth on massively parallel workloads: they have cheap hardware threads so they switch threads instead of stalling the pipeline.

In that case it would make more sense to move the smaller cores to a PCI card a la Xeon Phi.

The programming paradigm to enable this exists, it's just pure functional programming. But people are intimidated by Haskell.

We need more investment in functional programming before the dream of a 1000 core computer can be realized.
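The core property the FP argument leans on is that pure, associative operations can be regrouped freely, so chunks of work can be farmed out to cores and combined in any order. A sketch (simulating "two cores" sequentially):

```python
# Sketch: a pure, associative reduction can be split across cores because
# partial results can be combined in any grouping with the same answer.
from functools import reduce

def combine(a, b):            # pure and associative
    return a + b

data = list(range(1, 101))
sequential = reduce(combine, data)

# simulate "two cores": reduce each half independently, then combine
left = reduce(combine, data[:50])
right = reduce(combine, data[50:])
print(sequential, combine(left, right))  # 5050 5050
```

With side effects or order-dependent state, that regrouping is no longer legal — which is why impure code is so much harder to spread over many cores.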

On paper.

Parallel programming in Haskell is hard. GHC's style of by-demand lazy evaluation and the ubiquitous use of monads impose a lot of sequential execution.

There has been good research around parallel FP programming languages, but that was mostly around the 90s (Sisal, pH)

> The programming paradigm to enable this exists, it's just pure functional programming

Whether more cores can be used is ultimately down to the problem, not any language. Some problems are naturally parallelizable, some just aren't.

FP maybe exposes a bit more parallelism, but it may introduce more overheads such as less efficient cache use. FP is not a solution, it may be part of the solution.

A lot of software is still optimised with Intel specifically in mind, so for AMD to score close or equal to Intel in most single-threaded benchmarks is an amazing achievement.

The Radeon 5700 is also amazing and seems to be better than the new / Price Reduced 2060 Super.

I am wondering what are the chances of Apple choosing AMD in near future.

What's most exciting to me is how the new processors seem to do particularly well in less optimized benchmarks. From my experience these are more representative of day-to-day PC use, and I feel like the last decade of CPU releases has focused more on identifying which well-optimized applications perform well on which processors.

Ryzen 3000 seems like a very strong improvement to general computing.

Why didn't they measure system idle power consumption? That is a key metric for energy efficiency, since home computers spend a lot of time doing nothing.

PC Perspective ran idle power consumption benchmarks: https://pcper.com/2019/07/amd-ryzen-7-3700x-ryzen-9-3900x-re...

So they're about 10 watts better than last-gen Ryzen, but still 10 watts worse than Intel.

Well, they sold out instantly! Good for AMD. I'll be excited if I have the opportunity to build a workstation using a 3900X. That's such a sweet spot for a 12 core to price for media work. Unless someone buys me a Mac Pro, I'll probably be crawling back to Windows / AMD with this release now that they have motherboards with TB3 and 10GbE on them.

The radeon 5700/xt is also amazing.

That really was unexpected, unlike the CPUs which we already knew would be good.

The 5700 costs almost $100 more than the Vega 56 while being just a few % faster. The price:performance ratio on the new lineup of graphic cards is worse than the previous generation...

The 5700 is $349[1], the Vega 56 is $299[2], that's a $50 difference, not $100.

The 5700 gets 36.0 fps in Shadow of the Tomb Raider 4k, the Vega 56 gets 30.2[1]. That's a 19% improvement.

Also, the Vega 56 was originally priced at $399[3]. So the price:performance has definitely improved since Vega 56 was launched.

[1] https://www.anandtech.com/show/14618/the-amd-radeon-rx-5700-...

[2] https://www.newegg.com/powercolor-radeon-rx-vega-56-axrx-veg...

[3] https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-...

The 5700 is "only" twice as fast as the 570 while not using more power.

And according to Gamer's Nexus the reference blower cooler is abysmal.

"Abysmal" in what way? Enthusiast sites often confuse sufficient cooling with under-cooling. Cooling becomes more efficient when there's a high temperature differential. So engineers set up fan control so it only _really_ begins to kick in when the chip reaches 80C or thereabouts. Enthusiasts often confuse this with insufficient cooling. The chips themselves, however, don't really mind.

IIRC it was 95C under load at 52dB. So loud and hot. Normalising fan speed to 40dB made it throttle. I'd certainly call that abysmal.

That does sound pretty bad, if true, but more on the noise front rather than temp, assuming 95C is Tjunction and not Tcase.

I don't know much I trust GN anymore. Used to be the first site I'd read reviews from, now I actively avoid it. Steve just seems to want to draw attention for the most minor things (be it against Intel or AMD). He's the boy that always cries wolf, so I really don't trust his objectiveness anymore.

Don't buy reference design for AMD GPUs. Just wait for Sapphire to make custom cards. They are always a lot better.

3700X looks like a great CPU for an all around work station at 65W and 16 threads.

It seems that 9700K still beats AMD in many workloads. I wonder if the comparison is with spectre et. al. mitigations on Intel turned on or off?

The thing about the 9700K (which they point out) is that it doesn't support hyperthreading. Then it even beats the 9900K by a significant amount in a lot of the same tests, even though the 9900K should otherwise be the faster one, indicating the reason is that those workloads suffer a performance detriment from enabling SMT.

It should be interesting to see the results of those tests with SMT disabled.

It probably is, though they didn't explicitly state that. Windows 1903 is used, and AMD's latest strategy to improve Windows performance is in place.

Basically, if the higher clock of the 9700K can be put to use in a single-threaded application, then that chip can still come out on top. But in enough scenarios, the Ryzen 3700X and 3900X are quite competitive.

From comments on the article:

" They are with Spectre and Meltdown mitigations. They are not new enough results to include anything for Fallout/ZombieLoad."

Not sure if including the latter mitigations would make AMD beat Intel in single threaded.

Single threaded performance isn't really that interesting anymore except for some specific gaming benchmarks, and even in gaming, you're unlikely to be bottlenecked by single threaded performance (though memory latency could be an issue).

I agree. Just answering the question in the thread!

It is still important in compilation. That means it affects developers' work many, many times per day.

Compiling has generally been easy to parallelize, I thought? Using make -j<whatever> has been around for a long time. If you're only compiling one file then it's harder to get an improvement, but a single file should be quick anyway, so there's not a huge need to speed it up. It's not like you're doing something silly like including Boost everywhere, are you?

In large projects, there are multiple dependency chains that cannot be fully parallelized. There's a critical path, often a fairly long one, for which single-threaded performance is still very important. Another thing is that various platforms like Node.js may not be able to utilize much parallelism, adding to the single-threaded importance.
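To illustrate the critical-path limit with a made-up build graph (the targets and costs here are hypothetical):

```python
# Sketch: even with unlimited parallel jobs, a build can't finish faster
# than its longest dependency chain (the critical path).
deps = {            # hypothetical build graph: target -> prerequisites
    "app":  ["liba", "libb"],
    "liba": ["gen"],
    "libb": [],
    "gen":  [],
}
cost = {"app": 5, "liba": 3, "libb": 1, "gen": 4}  # seconds per target

def critical_path(target):
    # finish time of a target = its own cost + slowest prerequisite chain
    return cost[target] + max((critical_path(d) for d in deps[target]),
                              default=0)

print(critical_path("app"))  # 12s: gen -> liba -> app, regardless of -j
print(sum(cost.values()))    # 13s of total work, so parallelism barely helps
```

In this toy graph almost all the work sits on one chain, so faster cores beat more cores — which is the shape a lot of real incremental builds have.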

Web browsing is still single threaded. Games still have a very busy main thread.

1 UI thread !== only 1 thread

Firefox and chrome use a half-dozen or so threads parsing and compiling JS code so it loads quickly. Firefox's Quantum work adds a lot of threads for rendering as well. In addition, service workers, web workers, and Wasm all spawn additional processes which serve the same purpose if you aren't sharing memory directly.

That said, the Phoronix Selenium tests show AMD ahead by quite a bit. (source)[https://www.phoronix.com/scan.php?page=article&item=ryzen-37...]

So Servo is just snake-oil?

Firefox's quantum work comes from their servo work, so I'd say it's been valuable.

Firefox Kraken is faster on the 3700X, for example. It'll come down more to which browser you use than the processor.

> Web browsing is still single threaded.

Only in the narrow sense that processes are not threads. See `Window > Task Manager` in Chrome.

Yeah, the benchmarks aren't quite the clear AMD win (in performance/price) I was expecting for gaming. That said, it's important to factor in all the other stuff such as motherboards and coolers. Until there is a cheaper chipset (I expect a B550), the scales are tilted a bit further towards the 9700K. The CPU prices are interesting, but for anyone upgrading the total upgrade cost is what counts. So the minimum to compare would be CPU, cooler and motherboard.

It's my understanding that AMD's stock coolers are actually quite usable and fine for most people. So that might be one less price point to consider.

But comes to C/P value, Intel is totally ruined.

Not to worry. Dev community will quickly release new and shiny interpreter of interpreter. That'll take care of performance gains ;)

Are these actually on sale yet? I don't see them listed on Amazon.

I’ll be using this soon to build a MAME arcade cabinet, probably along with the new GPU.

One area where Intel has the advantage is the engineering desktop. I use a couple of programs that are compiled with Intel compilers, and one that uses AVX-512. Will Intel compilers optimize for AMD?

Newegg seems to have the 3600x and 3700x for sale, but the 3800x and 3900x currently say out of stock.

Anyone know when 3950x reviews are due out and better yet when the chip might hit stock here in the EU?

The launch date is in September.

So... not to sound like a naysayer here.. I mean, I do work for Intel, but I'm genuinely interested in the product.

Are these really, y'know, launched launched? I can't find any 3000 series CPUs for sale on either Amazon or Newegg... Who's stocking them?

Already sold out most places, it seems.

But who stocks it? Amazon and Newegg don't (i.e. you can't look the products up at all as of right now, even just to see that it's out of stock), which implies that they're filling the channel only with exclusive retailers or OEMs. Or, less charitably, that this is a paper launch with product to demo but not to actually sell...

The less broadly reviewed 3600X (which should completely beat the i5-9600K) is still in stock on Amazon.com, https://www.amazon.com/AMD-Ryzen-3600X-12-Thread-Processor/d.... 3600, 3700X and 3900X are completely missing there though. But Newegg and B&H list those (Newegg also has the 3600X in stock), so the others will be in stock at one point. In Germany alternate.de has them listed as being shipped in a few days (it's already possible to buy them now), with the Ryzen 5 3600 being in stock currently. So while there is limited supply, the hype and demand around those processors is big and they are somewhat available, this is no paper launch.

I can see the 3700X and 3900X stocked on Amazon Canada, UK, France, Spain and Japan. They seem to be out of stock in Amazon Germany. While Amazon US and Italy haven't had any stock yet.

How far did you look? I've already bought one of these new chips in Australia from a local computer store. I don't know what your definition of "launched" is, but it's obviously much more exacting than mine.

They were in stock but sold out instantly. The demand for these products is really strong. Inventories will be filled and instantly depleted for a month or more.

It's not a paper launch at all.

Can anybody explain the poor memory latency numbers? Is this due to the new chiplet design or is it from the larger L3? Intel chunks their LL cache out too, so I wasn't expecting such a big difference.

Also AMD got stomped in the system performance testing (those tests seem like they might be difficult to unpack into individual reasons given their breadth). Why would that be the case?

And WOW. Any vectorised workloads written for AVX-512... Besides the larger operand sizes, the instruction set changes seem to matter hugely. I did a small test using AVX-512 to scan a hash table for keys, and it made a huge difference too.

Still considering the top Ryzen chip for a sorely needed new desktop, but I don't play games so it would mostly be a cost thing.

If I understand correctly, the higher memory latency is a between-chiplet problem for the most part, with the highest latencies being between CCX's.

If you don't game, I see absolutely no reason not to buy Ryzen, but you might not need to buy an X570 mobo.

Intel's L3 is sharded out into 2MB chunks on a ring bus too, so I would have expected more comparable numbers. Is Intel's QPI link and cache coherency implementation that much better?

According to https://www.anandtech.com/show/14605/the-and-ryzen-3700x-390...

They are rerunning all tests due to a mobo firmware upgrade showing non-trivial improvements in turbo/boost mode.

There is still a chance that Intel will be beaten, even without taking the ZombieLoad etc. patches on Intel into consideration.

> Incorporating a significantly upgraded CPU architecture and built using TSMC's latest generation manufacturing process

How much of their innovation is due to TSMC vs AMDs in house research?

The ~15% IPC improvement is due to AMD while the frequency and power efficiency are more attributable to TSMC.

Interesting that they didn't choose Global Foundries. I guess splitting that off really was worth it.

One of Intel's big strength is their fabs. But perhaps that's becoming another drag on execution?

Global Foundries will never have 7 nm and Intel has failed to deliver 10 nm for years, so yeah, being tied to one fab looks like a bad idea.

Intel's 10 nm was delivered early last year in the form of one U-series CPU with a broken iGPU - not that it were a great chip, but they did deliver. Ice Lake mobile CPUs, built on the 10 nm+ process, are shipping as we speak, with Dell's XPS lineup already being available with said chips in them.

Got a source for Ice Lake products that are actually available? Dell's XPS 7390 was supposed to be one of the first, and it's still "coming soon": https://www.dell.com/en-us/shop/dell-laptops/new-xps-13-2-in...

I'd like to see a comparison running different speeds of memory, different CAS latencies, etc. Can you run this thing safely with 3200MHz memory, or do you really need 3600MHz?

There is one from a german magazine, https://www.hardwareluxx.de/index.php/artikel/hardware/proze.... You can use DDR4-3200 (that's also shown by most other benchmarks using that speed), but you get a performance increase from using DDR4-3600.

Curious why the TDP is the same for the 8, 12, and 16 core. I assume it's binned parts, but the switched off cores still consume power? Or some other reason?

The reported TDP is the designed TDP for the motherboard power delivery, not the measured consumption of each processor. To keep the number of TDP values low and to allow motherboard makers design a small number of variants, TDP values are rounded up to 65W, 95W and 105W. That means everything that goes above 65W in real life will be classified as 95W, even if the real consumption is 70 or 75W.

Not really. Intel made TDP numbers meaningless a long time ago, and AMD finally joined Intel with Zen 2. While Zen 1's 180W TDP Threadrippers actually did consume 180W from the power supply, Zen 2 takes about 1.5× the rated power, similar to Intel counterparts like the "95W" 9900K consuming 150-200W.

Actual power cap for Zen2 seems to be 1.35x TDP, but the speculation about them choosing a small number of TDP categories and sticking all their processors into them sounds right aside from that.
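Putting numbers on that 1.35× figure (which comes from reviews, not an official spec, so treat it as an assumption):

```python
# Sketch: reported TDP vs. the approximate socket power cap (PPT),
# assuming the ~1.35x ratio observed in reviews for Zen 2 parts.
PPT_RATIO = 1.35

tdps = {"3700X": 65, "3800X": 105, "3900X": 105}  # watts, as marketed
for part, tdp in tdps.items():
    print(f"{part}: TDP {tdp} W, approx power cap {tdp * PPT_RATIO:.0f} W")
```

So a "65W" part can legitimately draw close to 90W under full boost, which matches what the review graphs show.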

Actually I think the AnandTech page says that the 65W part consumes 90W at full load? Or so I read their graphs in the power consumption section...

TDP is measured at base frequency; with boost it can consume much more. Yes, TDP is a strange measure, I just explained how different CPUs have the same TDP.

You explaining it doesn't make it less of a marketing lie.

TDP is based on the base frequency, which decreases on those models as the core count increases. May not explain it fully but probably contributes to constant TDP.

AMD measures TDP at boost, IIRC.

Frequency differences can explain a (small) part of it, but I think a larger part comes down to market positioning and signaling vs competing Intel chips. The TDP is sort of an indicator of the overall "beefiness" of the chip.

Edit: I understand some would disagree with what I said, but I can't see what else would explain why the 3800X with one CPU chiplet at 3.9GHz base freq would be the same 105W as the 3900X with two chiplets at 3.8GHz. And by "beefiness" I mean the overall performance potential, including possible overclocking headroom.

I believe that's correct. There's an 8 core with 65w TDP which has a confirmed single chiplet; my guess is the 8 core with 105w TDP is two chiplets with several cores disabled.

I am wondering about that. The 105W TDP 8-core, though, doesn't have an L3 size that would reflect being two chiplets.

I wonder if since it's the same "chip" they have the same TDP listed, even though some of the cores are turned off.

What's the inter-thread latency on these things?

If you mean context switch latency, Phoronix's review has measurements. It's an order of magnitude lower than Intel's.

No. Two threads, each running on its own core. Send an int64 from one to another (and usually back)

So, inter-process communication? There's this: https://i.redd.it/2z5580uugja31.jpg

AMD does much better than intel, as long as the peer is in the same CCX.

For the case where it is not, zen2 has improved considerably relative to zen/zen+.

What's the story with the chipset fan noise? Is it a big issue on new motherboards?

I plan to build a new gaming machine on AMD. Civ6 with a huge world might even work well. Fingers crossed :)


This comment breaks the site guidelines, which include:

"Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."

Nationalistic flamewar is particularly unwelcome here.


GordonS 11 days ago [flagged]

> I don't pretend that USA is perfect

OK, good.

> I have a bias that the company with the best technology should not be working with those that would love to destroy American values

I presume you mean China? China would "love to destroy American values" then?

TBH, I'm not sure if you're buying into the latest "red danger" FUD by the US gov and tabloid media, or if you work for them...

Please do not respond to flamebait with flamewar.

