AMD Ryzen 5 3600 Review: Why Is This Amazon's Best Selling CPU? (anandtech.com)
307 points by greendave 8 months ago | 248 comments



For several (3) years now I have had no reason to update my desktop with an i7-6700k because single core performance has remained relatively flat (in fact my particular processor still has quite a good single-core rating). I would have liked more cores, but the reduction in single-threaded performance wasn't worth it. Zen 2 changed all of that...

Now EVERY single Zen 2 chip is at least a smidge faster than my 6700k in single-threaded benchmarks AND close to 100% faster in multi-core benchmarks. So for $180 I can buy something that takes a huge meaningful shit on my (fairly nice at the time) 6700k (~$350 when I purchased it).

That's just crazy to me how much performance I can get for so little. I'm going to buy something beastly with at least 16 cores, but! I also plan on building a little cluster using mini-ITX B550 boards and the 3300X. In fact, I'll probably build the little cluster first, because each 3300X is still faster than my 6700k and the total cost per system will be like $450!!one! It hasn't been since the original Core 2 Duo days that I've gotten such a meaningful upgrade for so little. Plus, when Zen 3 comes out it's a drop-in upgrade.

AMD is delivering insane value to their consumers, and I love it. I just wish I could buy a Zen 2 chip in a laptop that doesn't look like it was made for a fourteen year old (no offense to any fourteen year olds). I heard someone say the lack of 4k and more professional style laptops could be Intel back-channel fuckery, but... there's also a chance no one expected AMD, in a single fucking generation of CPUs, to sweep every single market.


>> I just wish I could buy a Zen 2 chip in a laptop that doesn't look like it was made for a fourteen year old (no offense to any fourteen year olds)

I just wish I could buy a decent motherboard that doesn't look like a 14yo's first attempt at drawing an F-35. My current rig does this weird glowing thing at night when it is supposed to be off. For my next machine I would honestly pay more to NOT have RGB support.


Check the ASRock Rack series motherboards, like https://www.asrockrack.com/general/productdetail.asp?Model=X... (AM4) or https://www.asrockrack.com/general/productdetail.asp?Model=X... (TR4). They are minus the RGB and plus IPMI.


Those are server boards. They lack some basic 'desktop' features like AIO control. I also need more than a pair of USB connections.

I did get a laugh out of the integrated graphics, probably the lowest specs on the market: "DDR4 16MB" - that's not a typo. I'm a little interested in how that is accomplished. Can one allocate a 16MB chunk of DDR4?


Server or workstation; AIO falls gently into RGB gamer territory.

The integrated graphics is for BMC, not for the console.


You can, but it's surprisingly hard to do so. There is the Asus Pro WS X570-ACE for current AMD consumer parts, but try to find an sTRX4 one for your $4000 CPU that doesn't look like it's going to transform into a giant robot.


Exactly this.

I had to use an ASUS motherboard and it took a good 10-15 minutes of poking through menus (with terrible keyboard navigation) to find the "magic rgb off" incantation.

If you happen to have an ASUS Prime Z370-A motherboard, here is what you have to do:

boot and quickly press F2 or DEL to get into the BIOS (I had to do this more than once to poise my finger over the right key)

Press F7 to enter Advanced mode

Choose the Advanced menu (4th menu across top)

-> Onboard Devices Configuration (8th sub-menu in list)

-> RGB LED Lighting configuration (2/3 way down the page)

You have to disable both:

- when system is in working state
- when system is in sleep, hibernate or soft off states

I basically tried all the obvious places, then all the menus before this one, before I stumbled upon it. It was nuts.


I deliberately left the RGB header unplugged on my CPU cooler for this reason.


I agree it's annoying, but you _can_ turn it off in the bios.


I like my personal desktop builds to be at least vaguely aesthetically pleasing, but I agree that RGB lights everywhere is not the way to do it. Just give me a nice black and white color scheme and maaaybe a couple of easily disabled lights here and there.


I like mine to be invisible, silent, and out of the way. I mounted mine under my desk and as far back as it would go.


Totally understand that viewpoint. The computer I have right now is the first one I ever built, so I think I'm still riding the high of "wow I built this" and want to see it. I imagine my future builds will be more and more shove it in the corner.


Or a case without a side panel? My motherboard glows, but I couldn't care less if I tried.


The chipset doesn't dissipate much heat, yet almost every current AMD motherboard (except one at $400) has one of those annoying tiny high-RPM fans that tend to die well before anything else on the motherboard.


> i7-6700k because single core performance has remained relatively flat

Well, it hasn't improved tremendously, but it hasn't remained flat. You can get around 20% more in single core performance, if it's so important to you.

https://www.cpubenchmark.net/singleThread.html

https://browser.geekbench.com/processor-benchmarks

https://www.cpu-monkey.com/en/cpu_benchmark-cinebench_r15_si...


20% over 3 years—ouch.


Probably more with meltdown mitigations on.


Lack of "acceptable high end" Zen 2 laptops is my gripe too. From what I understand, it comes down to the following two reasons:

1. Intel/Nvidia contracts have OEMs' hands tied (which explains the capped GPUs in most Zen 2 laptops).

2. Lack of widespread Thunderbolt 3 support on AMD.

While I fully buy reason 1, reason 2 is still hard to digest, as one can still ship a laptop with a USB-C port supporting PD and DisplayPort (with alternate mode) and call it a day, and most users won't mind. I hope Zen 3 causes the power shift in laptops.


I also noticed that the current Ryzen mobile 4000 series tops out at 32GB of memory supported, which sadly excludes them from even possibly being in the 2020 16” MBP since Apple’s already been selling 64GB systems. :/


All desktop Ryzen CPUs, as well as mobos with 4 DDR slots, support 128GB of memory (and there are reports that dmidecode shows support for 256GB on some), but sadly there is no info on that for the laptops.


Have been thinking strongly about the ROG Zephyrus 14 (cheaper and better than the 13" MBP in almost every way). But don't think I can stomach not having the webcam built-in (do way too many Zoom calls).


Lack of Thunderbolt is caused solely by Intel not sharing it.


I upgraded to a 3600X. I use an 8th-gen 6-core chip at work. This 3600X murders the Intel daily. The speed is insane. At work I run off an SSD with Optane acceleration; my home machine with the 3600X and NVMe just kills my work machine, which itself is no slouch. I've never been so impressed with a brand new machine, and I've been building them since 1990.


I don't know what's wrong with me, but I want to at least buy a €1400 24-core 3960X Threadripper, because of the insane upgradability.

The idea that in 2-3 years I could just buy a used 3970X or even a 3990X to more than double the core count is amazing to me, and knowing that keeps my mind from being blown by the insane value of the smaller Ryzens.


Pro tip: sell your used 6700k on ebay. Somewhere, someone can put it to good use, and judging by the latest closed deals on this model, you can easily sell it for something like $225.


> I just wish I could buy a Zen 2 chip in a laptop that doesn't look like it was made for a fourteen year old (no offense to any fourteen year olds). I heard someone say the lack of 4k and more professional style laptops could be Intel back-channel fuckery,

Lenovo just released a couple of ThinkPads powered by AMD Ryzen 7 Pro CPUs: the T495, T495s and X395. They don't have 4K screens though.

https://arstechnica.com/gadgets/2019/05/lenovo-adds-amd-ryze...


The T495 / X395 series is based on Zen+ (Ryzen 3000 mobile) chips and not Zen 2 (Ryzen 4000 mobile) - I know the naming is confusing given that the 3000 series desktop chips are Zen 2. However, Lenovo has announced ThinkPads based on Zen 2, such as models in the rebranded T14 lineup, but they have not been released yet.

https://www.anandtech.com/show/15772/new-lenovo-thinkpad-ran...


4K screens on laptops don't make sense to me. They are too small to get maximum value out of. 1440p is more than enough.


Hmm. You might want to get an eye test. Not being snarky: when I was 45 or so, having had "perfect" eyesight, someone suggested I get mine tested. Turns out most people lose the ability to focus at short distances with age, but the brain doesn't clue you in. Not being able to tell the difference between 1440p and 4K would be consistent with this. For me, even at 13" screen size, 4K is very obviously better for coding.


With a 55" screen you'd have to be within 3.5 feet to be able to see a difference with 4K. So no eye test needed. Your brain is either lying to you or the 4K screen you tested with is better than the non-4K one you used.

http://carltonbale.com/does-4k-resolution-matter/
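For reference, figures like that 3.5-foot number come from a simple angular-resolution calculation. Here is a minimal Python sketch, assuming the common 1-arcminute-per-pixel threshold for 20/20 vision (a simplification that the replies below take issue with):

    import math

    def max_useful_distance_ft(diagonal_in, horizontal_px, aspect=(16, 9)):
        """Distance beyond which adjacent pixels subtend less than 1 arcminute."""
        w, h = aspect
        width_in = diagonal_in * w / math.hypot(w, h)   # screen width in inches
        pixel_pitch_in = width_in / horizontal_px       # inches per pixel
        one_arcminute = math.radians(1 / 60)
        return pixel_pitch_in / math.tan(one_arcminute) / 12

    print(round(max_useful_distance_ft(55, 3840), 1))   # ~3.6 ft for a 55" 4K screen
    print(round(max_useful_distance_ft(55, 1920), 1))   # ~7.2 ft for a 55" 1080p screen

So the 3.5-foot claim does follow from the 1-arcminute model; the disagreement below is about whether that model describes real vision well.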


There are many things incorrect about that blog's approach. As others have pointed out, this isn't like HD Audio, where people literally cannot tell the difference between normal CD quality and HD Audio in any realistic test. I can absolutely see the difference in a 4K screen, and I can tell if even a smallish laptop screen is 4K or not from much higher distances than normal usage. I remember the first time I saw an Apple Retina display on a tiny laptop from across the room and said "holy shit!" out loud and walked over because I could see how sharp it was from meters away.

First of all, 20/20 vision is the average, not the best. Many people have substantially better than 20/20 vision. I remember laughing that I could read the super fine print "copyright notice" at the bottom of the eye test that is about 1/3rd the size of the smallest font in the test itself.

Secondly, the eye is complex, and has surprising capabilities that defy simple tests, which are designed as a medical diagnostic tool, not as a test of ultimate resolving capability. For example, vernier acuity (1) means that printing (and screens) need much higher resolution than one might think based on naive models.

1) https://en.wikipedia.org/wiki/Vernier_acuity


Actually, I don't have sources on hand, but I believe average human sight is better than 20/20. It's just that 20/20 was decided on as a standard for "good enough", I believe stemming from a military standard set long ago.


I don't know what the methodology of that site is, but it's certainly flawed when it comes to computer monitors. Based on their calculator, I shouldn't notice the difference between 1080p and 2160p when sitting 2 feet from my 16" laptop monitor, but the difference is night and day. I don't want to get into a philosophical debate, but if I can see the difference that your equation says I shouldn't be able to see, the equation is wrong, not reality.


This reminds me of the pervasive "your eyes can only see 24fps" myth, I guess people crave evidence that what they have is "good enough" and others are just being elitist?


Well, you will definitely see more than 24 fps, but that might or might not translate to a better experience. If you want the cinematic effect for a movie, it will be 24 fps; otherwise you will get the soap opera effect.

For other uses, like fluid animations or games, you want as high as possible.


I wonder if the "Cinematic" look at ~24fps seeming less tacky than the "Soap Opera" look at ~60fps has just been trained into us via familiarity though.

If we lived in an alternate universe where cinema was all 60fps and soap operas were 24 would we think that 24fps looked tacky instead?

On the other hand, I think there are definitely some objective effects in play here too - CGI is a lot easier to make convincing at a lower framerate with added motion blur.


Peter Jackson thinks so. He pushed for 48fps (HFR) in The Hobbit even though people complained. His theory is that once people acclimate, they will get a better experience.


4K is more than resolution too; it has more colors that can't be shown in regular RGB formats.


You're conflating two separate things - HDR is the specification for > 8 bit dynamic range, 4k just specifies 3840 x 2160 resolution. You can have displays that are HDR but not 4k, and vice versa.


I absolutely benefit from a 4k screen even in a small form factor. 768p is "enough" in the sense that we all got stuff done on such screens for many years, but the increase in text rendering quality with higher PPI screens is tremendously worth it to me for the reduced eyestrain. 4k is still noticeably better than 1440p. I wouldn't be surprised if 8k is noticeably better still (although with swiftly diminishing returns of course).


The reverse is also kinda true... Many people when they say "I like high resolution" mean "I like to fit lots of stuff on the screen at once".

If you're in the latter crowd, you can configure X or Wayland to render to a 4k screen buffer, and then downscale to fit the actual screen. Yes, downscaling no longer means 1 pixel=1 pixel, which introduces some blur, but unless you're a 20/20 vision kind of person, I doubt you'd be able to tell without your nose touching the screen...


But what are you going to do that your 6700K couldn't do already? It's only slightly faster in single-threaded. More cores, great, if you're rendering video or 3D or compiling all day.


You're a welcome voice of reason in this thread :)


I too was rocking the Skylake series (6600K for me though). However, I upgraded for two reasons - I went to the 3700X for twice as many cores, and the 6th-gen Intel series can't keep up with H.265 like 7th gen and later can.


It is absolutely on top of the Performance/$ chart on PassMark: https://www.cpubenchmark.net/cpu_value_available.html

It offers exceptional value. Pair that with a GeForce GTX 1650 SUPER which is at the top of Performance/$ chart for GPUs: https://www.videocardbenchmark.net/gpu_value.html

And you have a champion of a workstation right there.


And this is hardware you can order today and have running under your desk by the end of the week.

This is such a welcome change from the crypto days, with GPUs in short supply and at very high prices, and stagnant Intel CPU performance!


I’ve been trying to upgrade my desktop since the motherboard is dying, and I’ve found that small form factor PC (SFFPC) components are rarer and at even more of a premium than usual. Most commenters in various forums chalk it up to the plague, so now I’m hoping once the next generation of CPU, motherboard, and GPUs come out things won’t be as scarce.


Interesting you mention this, because I noticed a surprising surge out of nowhere among my friends, who suddenly all decided to build an SFFPC, despite none of them ever having been interested in SFFPCs and all of them running mid/full-towers prior.

None of them are first-time PC builders tho, so it might have something to do with lockdown-induced boredom and my friends deciding to challenge themselves to build an SFFPC, as it sounds like a major pain and challenge compared to even a mid-tower (according to those of them who have already finished).


I won't be building a new computer for a few more years, as I currently can't justify the cost of replacing my build, but when I do it'll almost certainly be a SFF build.

I've seen a number of people who have built tiny custom cases, and the challenge of both building something like that and getting everything to fit in it is super enticing. Definitely seems like the challenge and cool factor of fitting tons of power into something tiny is making a lot of people consider such builds over traditional mid or full tower machines.


For me it was because I’m moving soon and wanted to have a smaller computer to move. The shortages put a stop to that though.


> I’m hoping once the next generation of CPU, motherboard, and GPUs come out things won’t be as scarce

New desktop Intel 10th-gen CPUs are actually coming out pretty soon. They've got a new socket (LGA 1200), so new motherboards are coming out for it as well.

It's only marginally faster on single-thread benchmarks than the Ryzen 3000 (i.e. Zen 2) chips, and consumes a lot of energy.

Also, Intel is still using their circa-2014, ancient 14 nm process for these desktop 10th-gen chips, which is quite disappointing. No Ice Lake 10 nm desktop chips yet.


I was most likely going to stick with an AMD CPU; the Intel power consumption and price per performance is a bit high for me. I'm mostly developing javascript and gaming so ultra-high CPU performance isn't as necessary.


> javascript and gaming

Both of which are heavily impacted by single-thread performance. So getting a CPU with high single-thread perf would be a good thing to do.


The whole crypto thing is such a fascinating experiment of human behavior and side effects and so on.


Glad I bought mine before that craze. Has that opened a cheap used GPU market or did they burn them out?


Yep, RX 570/580s are selling for great used prices on eBay. I pretty much recommend them to anyone doing a budget build.


I'd definitely recommend against buying a second-hand GPU from crypto miners. They have been worn out running 24/7 for years.

If it's selling for $30, fair enough, but I've seen them going for a fair bit of the original price.


The thing is, I've never worn out a piece of equipment in 25 years of computer enthusiasm. I've had some DOAs and infamous equipment (say, the Deathstar) but never failures, even with second-hand equipment. Is that different for 24/7 max power draw, or is it just YMMV (and it will usually still work)?


I've seen every single piece of equipment wear out over the past decades except the CPU.

Hard disks and GPUs are the worst in my experience. They have the shortest lifespans and they're the most impactful when they die. Hard disks die suddenly, destroying all data; self-explanatory. GPUs die slowly, progressively producing rendering errors, then crashing the system erratically.

I've taken 3 GPUs to their graves between my personal computers and my family's.


To limit my spending I have a personal policy where I don't upgrade my GPU until it dies (I don't generally play the latest AAA games on my PC) and with just one outlier since 2004 I've had a new one about every 4-5 years. I've actually never had a CPU fail but I've had several HDDs die.


Nah, if your HDD is dying it's either that your task really demands it OR you're not cooling it enough.

If your GPU is dying, its VRAM or VRM is failing under high heat. Manufacturers cut fan speed to cut fan noise until warranty claims increase substantially, so they never provide sufficient cooling at stock settings.

Rackmount servers don’t sound like jet engines for no reason. Things fail if you don’t do that.


I've had hard disks fail in a PVR setup (heavy load), I've had memory fail, I've had motherboards fail, I've had power supplies fail, and I've literally typed laptop keyboards to death in ~2 years. I'd be utterly unsurprised about a GPU failure.

The best case scenario is "selling it because I'm buying a new one", the worst case scenario is "selling it because it's starting to flake out", and "selling it because the warranty ran out" is not much better.


Storage fails all the time. Cables, power supplies, UPS, monitors, keyboards, mice - I have a graveyard in a cabinet.

Also motherboards whose capacitors dry up; connectors that develop faults.

This is of course leaving out devices whose firmware gets exploited and becomes poison (routers, USB devices with malware).


It's both.

The YMMV is always big with electronics. E.g. my second-to-last graphics card (GTX 970) died within 2 years. But the aging effect from running 24/7 under high load is very real as well.

The high temperatures of permanently running at full power just wear it down.


Besides mechanical parts, you can still see failures from worn-out chemical components (capacitors, easily fixed), solder joints failing due to cycles of thermal expansion and contraction or water damage (the fix depends on the joint; BGA chips aren't easy to fix), and lastly electromigration (the real unfixable damage; it usually happens from overvoltage/overheating).


At worst you might need to spend 20 bucks to replace tired fans.

Mining GPUs run undervolted without powercycling. They're fine.


Eh, I mined on some GPUs for ~a year. They're all still running today in various family members' PCs. Admittedly I was mining Ethereum where the main goal was to undervolt and underclock as much as possible while maximizing the memory clock.


How did you cool them? I mean, didn't you find the stock fans way too weak, especially if it was in a "quiet" PC?


All the cards I had were "aftermarket" cards with better coolers (7x dual-fan GTX 1070s & a Pentium G3258 running Gentoo). It wasn't silent but it really wasn't that noisy. After undervolting and underclocking the GPU core, and overclocking the memory, they ran at ~60C under constant load.

I gave three of them to family members and sold the other 4. I was absolutely upfront in my eBay listings of their history.


I picked up a refurbished Vega 56 last year for a great deal. I suspect it was a mining card. Seems to work fine.


They are fine, but you will need to replace the thermal compound and reflash the BIOS. Miners usually underclock the GPU and overclock the memory.


It's remarkable how not only the first but also the #3, #5 and #6 positions are all some Ryzen 5 ?600? CPU. The 2600X is the only one that is currently lower, at #12. Also remarkable is how utterly absent Intel is from this chart. The i3-9100F managed to break into #2 and then there is a Pentium Gold G5400 at #15, but both of these are sub-$75 parts, while most AMD parts are significantly more expensive, likely bringing more profit to AMD. Usually price per performance drops sharply as performance goes up (it doesn't scale linearly), but remarkably a $405 processor from AMD, the 3900X, is in the top 10.

Power-performance wise AMD is also the killer: https://www.cpubenchmark.net/power_performance.html shows the new 15W TDP AMD Ryzen 5/7 4xxxU CPUs in the first four spots, while most Intel chips at the top of the chart are in 4.5-7W territory. If you look for ordinary socketed chips, you will find the Ryzen 3900, the Intel 9900T (which is a special 35W part), the 3950X and, astonishingly, the EPYC 7702 at 200W beating the Intel Core i3-1005G1, which is a 15W part on the latest Intel node at 10nm... that's just embarrassing.


I mean #3 is the same CPU with a slightly higher factory clock for $20 more


PS: The $405 is an Amazon price and it's not in stock there, but bhphotovideo has it in stock for $410.


Some days ago I checked https://opendata.blender.org/ and noticed that the RTX 2060, RTX 2060 Super and RTX 2070 are all equal when it comes to performance in the Blender benchmark.

Also the RTX 2070 Super might be the cheapest performance card at the moment.

Charts like these can save you a lot of money. But it will depend on the usage of course.


> Charts like these can save you a lot of money. But it will depend on the usage of course.

You have to be careful though. For example, the Radeon RX 570 is right next to the GeForce GTX 1650 SUPER at the top of the chart, but the GeForce costs almost 40% more. If the GPU isn't your bottleneck you may be better off with the less expensive one and save money to buy a faster CPU or more memory. Whereas if it is, you may be better off spending more for a faster GPU than either of those.

What these charts are great for is to look at the top fifteen or so as the set of candidates, which will all have good performance/$, and then if you prioritize better performance look at the fastest ones and if not then look at the least expensive ones.

But even then you have to be careful. For example, on the CPU chart the Core i3-9100F is one of the best values. The Ryzen 3 3200G is only slightly faster for $17 more (i.e. almost 25% more). But the 3200G has an iGPU, which is worth a lot more than $17 if it means you don't have to buy a discrete GPU.


This is why I favor looking at the XY scatter charts rather than a made-up heuristic like G3D/$. For instance, those 2 GPUs in your post will almost lie on a straight line through 0, without anything else on the lower-slope side. But we don't actually have to get the absolute lowest perf/$, and looking at the scatter chart we can bring those other factors into play too.


In addition, the most cost-efficient alternative may not be good enough for your particular use case and expectations, and then it wasn't a saving at all.


Also consider power consumption. Polaris Radeons are cheaper but power-hungry compared to Turing GeForces.


The power consumption is only significantly different under load. If you're in the corner case where Polaris Radeon is just fast enough even though you're constantly pushing its limits then this matters, but generally if you're running it at full load often enough for power consumption to be meaningful then you should be looking at the faster GPUs to begin with.


Yes, 1080p at 60fps which is still the most popular resolution out there according to the Steam Hardware Survey.

https://store.steampowered.com/hwsurvey/Steam-Hardware-Softw...


I wonder why the GTX 1080 isn't in the videobenchmark all-time value chart.


Because their chart is determined by benchmark score / current sale price. The 1080 wasn't heavily discounted when the 2xxx series was launched, and high end GPUs are rarely value for money winners anyway as cost scales more than linearly with performance. Double the $ might get you 25% more frames.

For context, their page on the 1080 (https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+1...) gives it a score / $ ratio of 20, while the lowest item on that page has a score of 30.

You could argue they should have some minimum threshold for performance before including them on the chart, certainly no one should be considering a r7 260 or gtx 770 for their new system, however cheap they've gotten, but they don't.


probably has more to do with their price scraping system than it does with the card's performance


I'm surprised the 1650 Super is ~30% cheaper than the 1660 Super, but it looks like current prices pretty accurately reflect the original MSRPs.

* https://www.tomshardware.com/reviews/evga-nvidia-geforce-gtx...

vs.

* https://www.tomshardware.com/reviews/nvidia-gtx_1650-super-t...


Does anyone know if the GeForce GTX 1650 SUPER, combined with a 3600x, is able to power a 4k monitor at 60hz?

Note: this is NOT for gaming. Just everyday workflow and dev environments in 4k, 60hz.


If you go on NVIDIA's website you can find the specifications of the card where it states the maximum resolution and refresh rate is as follows:

7680x4320@120Hz [1]

[1] https://www.nvidia.com/en-us/geforce/graphics-cards/gtx-1650...
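For a rough sense of scale, here is a back-of-the-envelope sketch (it ignores blanking overhead and assumes 8-bit-per-channel color, so real link requirements are somewhat higher): a single 4K60 desktop stream is a small fraction of what that quoted maximum implies.

    def pixel_data_rate_gbps(width, height, refresh_hz, bits_per_pixel=24):
        """Uncompressed pixel data rate, ignoring blanking intervals."""
        return width * height * refresh_hz * bits_per_pixel / 1e9

    print(round(pixel_data_rate_gbps(3840, 2160, 60), 1))    # ~11.9 Gbps for 4K60
    print(round(pixel_data_rate_gbps(7680, 4320, 120), 1))   # ~95.6 Gbps for the quoted 8K120 max

So driving one 4K monitor at 60Hz for everyday desktop work is nowhere near the card's limits.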


You don't even need a discrete GPU for that.


Without a doubt. For comparison a 13” MacBook Pro can drive two 4K displays plus the internal display at 2880x1800, all at 60Hz, from an integrated GPU.

You can probably run as many 4K monitors as the 1650 has video ports.


Yes, easily. Laptop 1050s could render your desktop in 4k60.


I've had a 2600 for a year or two. I was looking to see what I'd gain from an upgrade to the 3600. Interesting to see the new 2600s still compete in the value department, now that their prices have been reduced by about $50.


When I was building a workstation in late 2018, it was a close match for price to performance, even after getting a 50% discount on Intel HEDT processors from a friend! I'm happy to see AMD killing it in the CPU space.


With the proviso that the perf/$ approach works less well on GPUs, because you need to match that with what type of screen resolution and refresh rate you want to drive (if gaming, I guess).

..which might easily land you in 2060/2070 territory.


Wow, looking at the XY scatter graph there shows a pretty stark contrast.


It's neat to see that the 3900X is on the Zen 2 Threadripper price/performance line, but is available at mortal consumer prices and on the low-end platform (B450). The numbering scheme makes sense in the context of this plot, even though the 3900X is not a Threadripper while all other 39X0X parts are.


The 3950X is also an AM4 part (non-threadripper).


Interesting. AMD dominates the entire top 10 of that list.


Indeed! I knew that AMD was thoroughly on top, but that list really shows by how much. The first Intel in that list which I'd consider for a mid-range workstation (the i5-9600KF) is way down at #22.


I wish there were also a column for Watts/CPU Mark; this matters to me and I'm sure AMD kills it here, too.


It matters to me too.

Sorta here: https://www.cpubenchmark.net/power_performance.html

Unfortunately it's hard to tell which are embedded versus traditional sockets.


And limited to x86.


I've heard that Intels idle at lower power. Ryzens do not go below 18W, but Intels, even ancient Sandy Bridges idle at 5W. Not sure if it is true though.


That's roughly equivalent to Cinebench points per dollar, which is unrelated to gaming performance, if that's what you want. Doesn't mean the 3600 is bad, but it does mean that these kinds of synthetic benchmarks tell you very little about gaming performance, which is less about the speed of the cores themselves and more about memory latency, something all current Ryzen CPUs are architecturally very bad at (20-50% slower than Intel).


I was just reading through the Tom's Hardware review of the Ryzen 3600 and I can't see what you're talking about. The gaming performance is within 3% or so of the i5-9600K in basically every game. Granted, these tests were conducted at only 1920x1080, but still, if there were such a dramatic performance difference, it should be noticeable at lower resolutions too.

https://www.tomshardware.com/reviews/amd-ryzen-5-3600-review...


Native gaming isn't particularly cpu-bound these days. I wish I could take the overall performance of a 3700 and divide it between four cores instead of eight. IPC is king when it comes to game console emulation.


That same review has an overclocked i5-9600K vs a stock one and the performance difference is pretty large. An i5-9600K @ 5.0 is almost as significant of an improvement over a stock one as a stock i5-9600K is over the lowest performing CPU of the bunch.


What percentage of 9600Ks can hit that OC and keep stable under load?


~90 %


The Bulldozer series was that: 4, 6 or 8 normal cores which were glued in pairs into double-wide modules (this also produced a metric ton of comments and articles about how FX series chips "weren't really 8 (6, 4) cores"). The CPUs were good enough, but apparently this architecture was hard to utilize fully (and had other issues, e.g. with IPC).


Memory latency is 20-50 % worse, but that doesn't mean 20-50 % lower FPS. What it does mean is in CPU limited scenarios Ryzen CPUs currently cannot touch Intel's CPUs. Even several year old high end Intel parts outperform Ryzens there, even if the Ryzen part in question has better single threaded throughput (i.e. Cinebench).

At the 3600 price point this is largely irrelevant though, since you will be GPU limited by way of budget anyway.


A 3600 may have much worse memory latency, but it's also got more than twice the cache of a 9900K. At least according to the Linus Tech Tips review, the 3700X beats the 9900K in CSGO, a horrendously single-threaded, CPU-limited game.


I don't think CSGO provides much insight, considering that its engine was already outdated for many years on release (2012) and has received few significant updates since. The 3rd party benchmark map that is typically used for these tests does not reflect actual gameplay well. You'll notice that in the benchmark the game is running quite smoothly, meanwhile in actual games CSGO has been plagued by horrid microstuttering and poor framepacing for years (largely caused by the ever-increasing amount of skins and swappable models).

Games like SOTTR provide much better insight into the performance characteristics of modern engines instead of 2004 tech, both in CPU and GPU limited scenarios (depending on settings).


You specifically said:

> What it does mean is in CPU limited scenarios Ryzen CPUs currently cannot touch Intel's CPUs.

CSGO, regardless of how old the engine is, is very much a CPU limited title, and not only do Ryzen CPUs "touch" Intel in CSGO, they beat it.

In SOTTR the 3600 is about on par with a 9600k: https://www.gamersnexus.net/hwreviews/3489-amd-ryzen-5-3600-...


This is true. I'd say the build GP describes would probably be good for someone who is short on cash and wants to build a machine that is OK/good for gaming and good for things like programming or photo/audio/video editing. It's true that that build won't knock anyone's socks off and isn't optimized for anything in particular, but the fact that you can make a build like that with such good parts at that price is a net positive for consumers.


The weird thing about benchmarking is that the Cinebench score is treated as the most important metric. It's a very informative score for CG rendering use cases but not for all use cases, like gaming, video editing, etc. I suspect that vendors/benchmarkers use it because Cinebench multithread scales very well with increasing thread counts (so it's easier to advertise higher core-count models).


It is easily the best bang-for-buck CPU on the market, and it handles pretty much 99% of workflows that require a PC. Gaming? Check. Programming? Check. Graphic design? Check. Video editing? Also check (but on the lower end, true that).

It has also some really fantastic thermals; it does not produce that much heat. I use it in a fully fanless / passive cooling setup and it works great.


I just built a new machine around this CPU back in March too, specifically for playing Half-Life: Alyx, and couldn't be happier with this CPU.

My thermals were pretty great with the stock CPU cooler, but I ended up going with a Noctua NH-U14S which might be a little overkill, but when playing Half-Life: Alyx I stay between 45-60C, which seems pretty good to me. The game runs at a perfect framerate on High settings with the AMD 5700 XT and 32GB of DDR4 RAM on the Vive Pro.

I may end up upgrading to a Ryzen 7 4XXX series when they come out, but for now, this CPU is performing great. Def best bang for your buck.


> but I ended up going with a Noctua NH-U14S which might be a little overkill

Yes. I have the same cooler* on a Threadripper 3970X with 280W TDP. The only thing that it can't handle is a sustained all-core load for >10-15 minutes. Even at all-core load it still turbos above base clock (but less than max turbo).

* Technically a different SKU with a larger base plate, but otherwise the same design.


I built a new PC around one just this weekend. Turned out I'd forgotten to plug in the CPU cooler fan and it ran great. Big cooler though


I built a new system with one of these last December, and it has been a great experience for everything I have thrown at it. I wasn't sure whether I'd stick with it or upgrade later - I mainly got it just to "get into" the Ryzen ecosystem, as this article suggests many do - but so far I see absolutely no reason for anything more.

One thing I didn't realize at the time is that the 3000 series is "end of the line" for B450 motherboards, which should be a real consideration for anybody eyeing a new system with this CPU now. As this article suggests, it leaves you in an awkward spot. I opted to upgrade to an X570, but only barely; a lucky decision, as it turns out!


> It has also some really fantastic thermals

Granted, my case doesn't have the best airflow (NZXT H210i), but I was getting pretty high CPU temps with the stock cooler: averaging mid-to-upper 80s during gaming, high 50s idle.

I went ahead and ordered a Noctua cooler which should be coming today actually. Hoping it helps.


The 3600 stock cooler is GARBAGE. Any load at all and it would eventually climb up to 95 degrees. I tried re-seating it, using different thermal compound, just blowing more air at my computer, but nothing fixes the fact that there's so little material in the cooler, and it's just not that functional. It can handle small but bursty loads, but any constant one easily overcomes its pitiful performance.

That said, a $30 aftermarket cooler will keep you at 50 degrees even during max load. Great chip for the money and I'm super happy with it.


I just got one myself and in my system, it seems like a fairly hot puppy. I see it idle around 50-60C and load at 70C or more. This was with an aftermarket cooler, although I may not have applied the thermal paste well. I haven't done any thorough testing but it seems hotter than the rather old i5 that it replaced.


IntelliJ stuff is usually slow (I like smaller form factor machines). Finally with the 3600 performance is at least reasonable and not annoying.

It took forever to get bundled, but you can get Lenovo and I think HP prebuilt machines for this as well - for work I don't like handbuilding a machine.


I don't have first-hand experience with higher-spec builds on the single-CCD Zen 2 chips, but the dual-CCD chips are not particularly good in terms of thermals. Only the very best air coolers can handle them, and they are not particularly quiet doing so. Idle CPU power consumption is unimpressive and at 50-70 W exceeds what older systems achieved for the entire system.


I had more problems with the 3700X although on paper it is 65W TDP (same as the 3600). In practice it gets much hotter and draws more power. The 3600 never gave me any problems.


Which case (and GPU) did you go with?


Streacom DB4, one of the most beautiful passive cases around. It matches Apple-level design, it's stunning.

As GPU I had the Gigabyte GTX 1650 mini and the Gigabyte GTX 1660 TI mini. The 1650 is definitely easier to put in a passively cooled system, but the 1660 TI is much more powerful.


Price for performance. It plays the latest games at high resolution without being a CPU bottleneck. In gaming benchmarks the 3600 is usually within a few fps of the 3800x. https://youtu.be/9OXbhgnHvXQ


It's more than price for performance. It's also a very high absolute performance. Especially in games, where something like a Threadripper processor is not faster.


The 3700X is very slightly lower on performance per $ but it does have 8 cores and 16 threads so if you're going to be compiling code it's totally worth it.

The 3900x has been selling for $400 which is also amazing value.

Intel CPU market is behind after many years. Long live AMD design + Taiwan manufacturing.


> Intel CPU market is behind after many years. Long live AMD design + Taiwan manufacturing.

Except Intel still delivers better and more consistent gaming performance largely based on better overall architecture (L3, IMC and what connects these).

Both Intel and AMD are billion-dollar publicly traded companies. I don't get why people would fanboy either of them. I don't want AMD to win, because then they are going to abuse their position, just like Intel did. Conversely, I don't want Intel to win. Two players in a market is already problematic; trying for one player is just suicidal.


I don't think most people want AMD to win completely, as no one expects Intel to ever fold. Also historically, Intel has been much scummier, and it's paid off for them. When AMD was previously top dog (most recently with Athlon64), I don't remember them being as anticompetitive as Intel has been.

Intel has 5x the market cap and 10x the revenue compared to AMD, and AMD is the underdog in its dGPU competition with Nvidia too! If you want innovation, you'd want to equalize investment in R&D across both of these competing pairs, so it makes sense to be pro-AMD.


I don't think anyone actually wants AMD to "win", regardless of how they express themselves. What they want is AMD to have a lead on Intel, since it's been so long since that's been the case, and we'll all benefit from Intel not being continuously in a hefty lead, like it has been for decades.


AMD is so far behind Intel in market share, money, and resources overall that worrying about "you don't want AMD to win (either)" is looking too far ahead.


I'm still thinking about my cheap Ryzen-based PC only as a stop-gap solution, until the "real deal", aka Intel 7nm CPUs, premieres in 2021 (fingers crossed).


Why is the "real deal" not the 7nm+ AMD CPUs coming out this year, or the 5nm AMD CPUs likely coming out next year?


Because AMD is a company with a long history of making crappy processors, that only recently created a decent one, by lucky coincidence. Intel is exactly the opposite.

Even now Intel architecture is still superior in some areas, the biggest downside of it being the fact that they are stuck on the outdated 14nm++++ process.

Once Intel's 7nm comes out there will be no reason to buy AMD, except for the price (I expect Intel's 7nm process to be roughly equivalent, or even superior, to TSMC's 5nm).


>that only recently created a decent one, by lucky coincidence.

They have been repeating that "lucky coincidence" for the past 3 generations of Ryzen (ofc you can nitpick that one of them wasn't a true gen, but more of a gen 1.5, but whatever).

3 extremely successful generations in a row seems a bit more than just a "lucky coincidence" to me.


An alternative point of view is that AMD took the long view and invested heavily in future gains at the expense of short-term gains, allowing the market to be even less competitive for Intel during that time.

Perhaps their current situation isn't as much a "lucky coincidence" as the payoff for a very successful long-term strategy?


>Because AMD is a company with a long history of making crappy processors

Tell that to Microsoft and Sony.

Like it or not, the Xbox and PlayStation show what the future will bring for games on PC, which is exactly the opposite of what Intel is trying to sell.


You must not remember the Athlon 64 era.


Ended up with a 3700x. They seem to run at the very edge of their thermals straight out of the box though.

I tried a light touch of overclocking and got absolutely nowhere. (Stock cooler)

Running that close to the edge is an achievement in itself though I guess


I can confirm this. I'm running a 3700X with the stock cooler and it's idling at 50-60 degrees at 3.6 GHz. If I enable Game Boost mode in the motherboard (it basically pushes the frequency to 4.2 and turns up all the fans), it increases the performance of the CPU in Cinebench from 4662 to 5087 points. That said, the CPU reaches a whopping 100 degrees!!! Probably need to get a better cooler before enabling that again.


I see this as a good thing: the chip is tuned well enough to hit its limits out of the box, without having to mess around with manual overclocking to get optimal performance.


Yeah, and this is advertised in the 700X moniker. If you want an overclockable chip you need something from the *600 series.


This might be slightly wrong: I think the X is the key indicator, not the 600/700 distinction.


>so if you're going to be compiling code it's totally worth it.

This always gets brought up as a use case, but how many people are building non-trivial codebases where compile times matter? If build times are under 5s, near-perfect parallelization only nets you a 1.25s gain between a 3700X and a 3600.
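For what it's worth, that 1.25s figure is just the idealized 12-thread vs. 16-thread scaling, assuming the build parallelizes perfectly (which real builds, with their serial configure/link phases, rarely do):

    # Idealized scaling of a 5 s build from the 3600 (6c/12t) to the 3700X (8c/16t)
    threads_3600, threads_3700x = 12, 16
    build_seconds_on_3600 = 5.0
    total_work = build_seconds_on_3600 * threads_3600       # thread-seconds of work
    build_seconds_on_3700x = total_work / threads_3700x     # 3.75 s
    print(build_seconds_on_3600 - build_seconds_on_3700x)   # 1.25 s saved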


Most web developers. Webpack + typescript compiling and bundling all the assets.

You want to see a preview as quickly as possible. Best if you can keep a live view on a second monitor.


I doubt much of this is parallelised. Would love to be wrong though!


Cannot speak for everyone, but the builds of my team are heavily parallelized, and we aren't even running a monorepo. For any non-trivial project that has its own somewhat complex build pipeline with many internal dependencies and such, it is pretty much a given.

The only scenario I can foresee where people wouldn't be doing parallel builds is if they are just working on their personal side project with no other people involved, where the need for parallel builds is non-existent either way. You don't want to have one giant monolith project for your whole complex web app, it becomes a nightmare very quickly.


is any of that compute-bound? I would imagine that it's mostly I/O-bound.


My CPU definitely spikes when compiling JS.


Any decently sized C++ or Swift code base is going to have compile times way over 5s. Many just moderately sized projects have build times over 10 minutes.


My current codebase is 28 hours for a full build on one machine...

I dread having to update the compiler or some core "used nearly everywhere" header file, and wait a whole another 28 hours...


This is for a full rebuild, right? If you're changing a few files it shouldn't have to recompile + relink everything.


For moderate to large projects, link time can be a huge factor even for incremental builds. It might be that faster IO is the better investment, though.


Bought one for a new PC back in January. The value is great and I haven't noticed a single bottleneck. I don't do anything too intensive, and the games I play haven't stressed it that hard. The best part of the upgrade has been Rust compile times. Coming from an older ThinkPad, it's a huge change.


AMD is making good CPUs, but I had an issue with the Ryzen 5 1600 that left me with a bad experience with AMD. It was my first ever build and I used this CPU; all good, but from time to time the Linux desktop I use would freeze, and a hard reboot was the only fix. I noticed that this happened when I left the PC idle and the screen turned off. I tried everything and just accepted that this was a bug in KDE/Ubuntu/kernel/motherboard/RAM or GPU; not in my wildest thinking did I imagine it would be the CPU.

Turns out it was the CPU and how crappily AMD handles the C-states that save energy. I had to turn them completely off and that resolved the issue (in AMD's defense, freezing the entire OS does save a lot of energy).

This was exactly 2 years ago, and I had the issue for almost a year before figuring out the fix. I wonder if the new generation has this problem too?


> Turns out it was the CPU and how crappily AMD handles the C-states that save energy

From my experience, Intel is no better.

I have a 4-year-old laptop with an Intel CPU that has the same problem; it was never fixed by Intel. [see EDIT]

For the first 3 years of use I just accepted the fact that my laptop would randomly freeze and I would have to reboot it.

One day, I got sick of it and started digging into forums until I found a way to avoid the specific c-state (by modifying kernel boot parameters) that caused the issue.

EDIT: Apparently, it's getting fixed now. Just 5 years late, issue was reported in 2015 https://bugzilla.kernel.org/show_bug.cgi?id=109051


I’ve just been through this exact thing with a 3200G on Ubuntu 19 and 20. Took months to diagnose. Extremely painful. Ended up replacing the RAM and motherboard before finally determining the problem was with the CPU. Disabling C-states helped but didn’t resolve the issue. Tried a bunch of other things too, none of which ultimately worked.

In the end I switched to a 3700X; so far the problem hasn’t reoccurred. No doubt it will now that I’ve posted about it (fool me twice, shame on me).


Worth pointing out that 3xxxG parts are Zen+ and not Zen 2 like all other Ryzen 3000 parts. So the CPU in the 3200G is very different from the one in a 3700X.


I stopped using Ubuntu 10 years ago, when I had a weird issue. Out of nowhere the computer locked up and required a hard reset. It took a while until I figured out that the issue originated with the Intel WiFi card. I eventually found the bug; it turned out that the kernel crashed whenever the adapter sensed an 802.11n packet (the protocol was very new at the time, so it wasn't everywhere yet). Ubuntu still decided to go ahead with the release, yet held off on including the freshly released major version of OpenOffice (I think version 3) because of stability. This was extremely frustrating, because there was no fix and no option to roll back to an older version. So that was when I dropped this POS and used a different distro. I'm wondering if this is again an Ubuntu issue and not the CPU.


Yeah. Who knows. It’s just an anecdote, really. But one that hopefully saves someone months of tedium.


There were more issues, but that one was what pushed me over the edge.

I constantly had problems when attaching/detaching an external monitor to my laptop; once in a while it wouldn't switch, or various parts of the desktop (like the task bar, etc.) would move around.

When they introduced PulseAudio, it wasn't ready at the time, so there were a lot of audio problems back then.

Overall the experience with Ubuntu was that every new release was taking some old bugs away and replacing them with new ones.

I hope things improved since then.


I've been running a 3400G since last fall on Debian with a custom kernel to get the latest video drivers. It's been stable until a few freeze ups in the past week. I'd pin the blame on Ubuntu.


Surprisingly, the solution for the first gen Ryzens / 1600AFs is not allowing the power supply idle voltage to go into low power mode. Check your BIOS for it.

It turns out many PSUs don't handle low CPU power states well, and disabling C6 just prevents the CPU from being able to idle (and consequently, boost) properly. I've had perfect uptime with my 1600AF in a storage node since discovering this.


All my computers were Intel-based except for one. It was an Athlon64 X2 from ~2006 and I also felt something was wrong when running Linux. Random crashes happened quite often; I never got to know the cause. Performance was not amazing either, despite my previous computer being 4 years older.

I've never had a problem running Intel-based systems with Linux. So, even though everybody's talking about how good AMD is right now, and why you should buy one for your next rig, and how good AMD graphics cards are with open drivers instead of the Nvidia binary blobs... I sincerely don't see any reason to ditch what has been working OK and make the switch when preparing a new build. Should I?


Yes, you should. AMD is significantly different than it was almost 14 years ago. They wipe the floor with Intel, which is still using its years-old 14 nm process. AMD simultaneously has higher performance, more cores, lower energy usage and lower cores. It is categorically better in all fields.


Yes, for the technical reasons alone, I'd be switching to AMD. And I guess a lot has changed in 15 years, sure, but as a Linux user, comments reporting issues like these are what worries me. Compared to the mostly hassle-free Intel experience, to me buying AMD gear sounds almost like a lottery.


* lower cost, not lower cores


As far as anecdotal evidence goes, I had a Linux desktop built around a 64-bit Athlon in 2004-2006 - as the primary, it saw plenty of use, but I don't recall any random crashes.


At Ryzen's launch mobo vendors weren't investing a lot into the platform because they were largely burned by previous AMD generations that failed to compete with Intel.

It wasn't until Ryzen 2000 / 3000 in particular that board vendors took the AMD stack seriously again.

I remember that at release, getting RAM working on Zen 1 was a nightmare. Nowadays you can throw almost any stick at 3733MHz on Zen 2, no problem.


Yeah, early RAM support on Zen 1 sucked. It was definitely fair for vendors not to want to plow huge amounts of money in at that point, as the Bulldozer-era CPUs sucked in a profound manner. Definitely glad for those first few AGESA updates that got things like XMP fully working on my early X370 board.


My 1800X did the same thing. I was able to RMA it and get a replacement that worked fine. It's only the early chips that had this issue.


AFAIK this goes back to the AMD graphics driver; I just built my wife a Ryzen 3200G / B450 system and she has had a couple of spontaneous reboots too. I did burn-in using an old AMD 6970 card instead of the built-in graphics and it did fine with that for me; but of course that card won't run the games she wants, so we're hoping for a fix too.


I have a TR 2970WX at work with the same issue: it freezes the machine when coming up from sleep. You just have to disable the lower C-states.

At home I've had a 3950x now for about six months and no such issues with the CPU at all. It just works and runs nicely. No freezes, no problems.


I have seen this when running Linux desktop on older AMD CPUs.


It's a pretty awesome chip. I bought one the moment they came out with a B450 motherboard. It sounds like I won't be upgrading to Zen 3 with it, but that's probably fine. It's more than enough for the tasks I do.

With that said, once the 3300X is out, I would likely go that route with a cheaper A320 motherboard. This would bring the cost of the CPU/motherboard to about $175, which is terrific value and leaves you with more money for other components.


I'm in the same boat with a 3700x. I'm pretty sure the 3950x will be available at a good enough price that I jump to that before needing a motherboard upgrade.


The 3300X won't work on anything lower than B5xx, AFAIK.


Ryzen 3000-series will run on B450+ and X470+, based on current specs. Obviously, at some point there will be a future motherboard release that will change the +'s to ranges.

https://images.anandtech.com/doci/15774/Ryzen%203_B550_Press...

Motherboards don't need to have the xx. There is only B550. There is no other B5xx.


By B5xx I also meant X570. Anyway, I am almost 100% sure, it won't work on A320, which is what the person in the message above desired to run it on. And it is unclear if there will be BIOS updates for the cheaper B450s.


Again, https://images.anandtech.com/doci/15774/Ryzen%203_B550_Press...

A320 supports Ryzen 1000-series, 2000-series, 2000-series APUs, and 3000-series APUs.

A320 does not support Ryzen 3000-series, and future Ryzen processors.

I can keep reading the linked chart for you, if you'd like.


Some A320 boards actually do support Ryzen 3000 with a BIOS update, like the one below. I wouldn't really recommend getting them because you'll probably need to flash the BIOS, but they can be used.

https://www.msi.com/Motherboard/support/A320M-GAMING-PRO#sup...


MSI seems to have committed to different support than AMD has. Gamers Nexus has covered the AMD chipset stuff recently to great detail.[0]

There are several issues that come together to make it difficult to support all AM4 CPUs across all the AM4 motherboards. MSI seems to have committed to their AM4 motherboards supporting all AM4 CPUs, and they are having to perform some gyrations (and potentially some reverse-engineering of AMD-supplied binary blobs) to meet their claims.

[0] https://www.youtube.com/watch?v=T5X-8vZtml8&t=0s and https://www.youtube.com/watch?v=JluNkjdpxFo


Please change the attitude. A320 does not officially support the 3100 and 3300x the poster above mentioned. Please pay attention when replying, otherwise you come across as rude.


Replying with supposition, "I am almost 100% sure, it won't work on A320," to vendor-supplied guidance (linked image in parent to supposition) on exactly what is and isn't intended to be supported comes across as rude. It makes it seem like there is no desire to engage in a conversation, but rather just to speak one's thoughts without regard to the partner. If the first response to engage with a specific user is to critique them for the gall of calling out that they're repeating themselves, it seems rather a callous disregard for others.


[flagged]


Oh good, so you'll see I post on lots of hardware threads, and usually provide links to help clarify things, and my own experience where relevant. And you'll surely have gone back far enough to see my comments on Intel CPU threads focusing on the topic of the thread (Intel) when those were the common ones. I share what I know, because not so many people try to keep up with PC hardware and it can often be confusing with lots of numbers and letters. I try to remove uncertainty and leave helpful breadcrumbs for later readers.

So you should understand that the exchange that goes like this might be seen as someone ignoring facts brought to the table by a conversation partner:

Someone: incorrect statement (3000-series only on B550+)

Sometwo: correction, clarification, link to spec

Someone: supposition (I am almost sure)

Sometwo: precise specification to remove the need for supposition, link to spec again

Someone: criticism of sometwo for attitude regarding continued provision of vendor-supplied information in favor of someone's false statements and suppositions

If I'm reading this thread, for any hope of information about Ryzen compatibility, you've lied to me once, and given me a guess. I've provided links to manufacturer guidance and ruffled someone's feathers. But facts are more useful.


I bought the Ryzen 9 3900X (12-core) for my new work PC and it absolutely blows anything I've previously used out of the water. I was considering a top-range i9, but for the price and performance I'm happy to have gone with the Ryzen and am now an AMD fan.


The bundled cooler with the Ryzen helps a lot as well, as those low tech hunks of metal are crazy expensive.


I have a 3900X and in gaming it easily hits 80-90C. At 100% load it gets to 97C and throttles. I switched to a Noctua D15 and it does not go past 70 on full load and 60 in gaming. Highly recommend it, it's so quiet.


"throttles"

Are you certain you don't mean "stops boosting" when you say that?

I'm running exactly the same: a 3900X with the default included cooler. Yeah, it hits 95C and at that point it's running at 3.8 GHz instead of 4.4 (I never see 4.6 unless I boot Linux in single user mode).

But 3.8 is the base clock speed, so I don't think "throttle" is the right word for "running at base clock."
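
If you want to check this yourself on Linux, here's a rough sketch (assuming the cpufreq sysfs interface is exposed, which it is on most distros) that reads the current per-core frequency; the 3.8 GHz base clock is just the 3900X figure from above, so adjust for your own chip:

    import glob

    BASE_CLOCK_KHZ = 3_800_000  # 3900X base clock; adjust for your CPU

    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")):
        core = path.split("/")[-3]  # e.g. "cpu0"
        khz = int(open(path).read())
        state = "below base" if khz < BASE_CLOCK_KHZ else "at/above base"
        print(f"{core}: {khz / 1_000_000:.2f} GHz ({state})")

If every core sits at or above base under full load, that's "not boosting", not throttling.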


I find the included cooler for the 3600 not so great. I bought one the month it came out, and was hitting pretty high temps when encoding h264 using only the CPU. I tried reapplying thermal paste before buying a better cooler. I think the 3600X and up have much better coolers.


Yeah, the X series processors come with the Wraith Prism which is a really nice cooler. My processor isn't overclocked but the fact that I've never seen it get above 55 degC with the prism is impressive.


>never seen it get above 55 degC with the prism is impressive.

Pretty sure I've seen 75+ on my 3700X.


For most cases on the 3600, you need a separate cooler to get longer life from the CPU. I was getting warnings when playing some games with the stock cooler. Good thing they only cost about $30.


Do modern CPUs fry themselves like the ones from 15-20 years ago? I thought they just throttled down when approaching their limits, and extra cooling is helpful for allowing CPUs to boost clock speeds for longer.


AMD seems to design their components to get all the way up to 95 and sit there. It happened in the stock cooler for the 3600, and it happens on my 5700xt, even though I went for one of the nicer cards. It's still below the safe junction temperature, and it won't fry, and it can even run just fine like that for several hours.

The downside is that there is evidence that such high temperatures increase electromigration (or something similar) in the chips themselves, shortening their lifetimes. I want this computer to last 10 years, so I bought an aftermarket CPU cooler for $30 to keep temps closer to 50 degrees.
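
If anyone wants to see where their chip actually settles under a long workload, here's a minimal sketch for Linux, assuming the k10temp driver (the standard one for Ryzen) is loaded; hwmon reports temperatures in millidegrees C:

    import glob, time

    def find_k10temp():
        for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
            if open(hwmon + "/name").read().strip() == "k10temp":
                return hwmon
        raise SystemExit("k10temp hwmon device not found")

    hwmon = find_k10temp()
    while True:
        for sensor in sorted(glob.glob(hwmon + "/temp*_input")):
            try:
                label = open(sensor.replace("_input", "_label")).read().strip()  # e.g. "Tctl", "Tdie"
            except FileNotFoundError:
                label = sensor.rsplit("/", 1)[-1]
            print(f"{label}: {int(open(sensor).read()) / 1000:.1f} C")
        time.sleep(5)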


Not generally in the short term, but over the longer term increased heat is going to make it more likely that the chip fails prematurely.

Of course, if you do something like not attaching the heatsink at all, it's plausible that heat could spike fast enough to cook the chip before throttling or overtemp protection can kick in/shut the system off.


Thermalright coolers are sleepers; not well known, but with superb performance and a price-performance ratio far superior to anything else. You can basically get something that gets close to a 100 € Noctua NH-D15 for less than half the price.


Also check Scythe


Especially in super-budget builds. The extra $30 that you save by getting a reasonably good cooler with your CPU versus needing to replace the stock Intel one can be huge for someone building in the $500 range.


I find it more interesting that AMD Ryzen chips are the top 6 best-selling CPUs.


Last summer, the 3600X was my move away from Intel after 20+ years of building strictly Intel machines at home.

Here's the build: https://pcpartpicker.com/b/p9D2FT


The newer AMD machines are amazing. It's the second coming of Moore's law, but for core counts instead of clock speed. I just put together a machine with a 3900x at home and it is about 2/3 the speed of the three-year-old Intel Xeon machine with dual processors and 28 cores I use at work. The 3900x was around $450; the processors in the Intel machine were around $8k.

I only wish the AMD desktop and workstation machines could support more RAM. I have 256GB on the Intel machine, whereas 128GB was the max for the 3900x. I think the threadripper line only goes up to 256GB, which seems a little low for machines which such a large core count.


Does AMD have any plans for an integrated GPU?

They make great stuff, but the value proposition disappears pretty quickly if you aren't building a gaming rig/workstation that already requires a dedicated GPU.



Yes: the Ryzen 3 & 5 models with a "G" label.


Yeah, apparently the 4700G[1] has already been spotted in benchmarks. Basically their 8-core ZEN2 APU.

[1] https://videocardz.com/newz/amd-ryzen-7-4700g-8-core-apu-pic...


Yes, look for CPU models suffixed with “G”. Unfortunately typically only available on the lower-tier models.


These are a full generation behind, though. 3xxxG is Zen+, 4xxxG is Zen 2. Memory compatibility is substantially worse (the Ryzen 3000 / Zen 2 memory controller is very good actually, but severely bottlenecked by Infinity Fabric) and CPU performance is quite a lot worse clock-per-clock as well.


Ryzen 3/5 have variants with integrated GPU:

https://www.pcworld.com/article/3402056/amds-new-ryzen-3000-...


Maybe slightly off-topic, but how do y'all feel about the Epyc for workstation-type workloads? I really want ECC for my next workstation, and my understanding is that Ryzen/Threadripper ECC support is very hit or miss (the common failure mode is that it will claim it's on, but not actually detect or correct memory errors). But on the other hand, these Epyc chips sure seem slow on paper compared to consumer-grade stuff. I am not sure it matters, though, and am interested in what HN's thoughts are there.


For Threadripper, all CPUs and motherboards are fully validated for ECC. (AMD requires Threadripper motherboards to implement ECC.)

It's only Ryzen where the CPUs support ECC, but AMD doesn't require AM4 socket motherboards to implement it. (And you have to verify that a board advertising ECC actually implements ECC and doesn't just accept ECC modules.)

I'm in the same situation, looking to get a new workstation and want ECC. Performance-wise, I would be good with a 3900x or 3950x. But PCIe lanes and good IOMMU separation are also requirements for me, so it looks like I have to go Threadripper. I'm disappointed that there's no 16 core Threadripper that has the other benefits of a HEDT platform for people who don't need zillions of cores. The current generation motherboards are also disappointing in that they max out at 4 PCIe slots. Apparently this is because PCIe 4.0 is still rather expensive at the moment.
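
For anyone weighing the IOMMU side on hardware they already have access to, here's a rough sketch (Linux only, and it assumes lspci is installed) that prints which PCI devices share an IOMMU group; for clean passthrough/isolation you want the devices you care about in groups of their own:

    import glob, os, subprocess

    for group in sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                        key=lambda p: int(os.path.basename(p))):
        print(f"IOMMU group {os.path.basename(group)}:")
        for dev in sorted(glob.glob(group + "/devices/*")):
            bdf = os.path.basename(dev)  # PCI address, e.g. 0000:0b:00.0
            desc = subprocess.run(["lspci", "-s", bdf],
                                  capture_output=True, text=True).stdout.strip()
            print("  " + (desc or bdf))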


> For Threadripper, all CPUs and motherboards are fully validated for ECC. (AMD requires Threadripper motherboards to implement ECC.)

I did not know this. Do you have a source for this that I can share with others for reference in the future? My understanding was that Ryzen and Threadripper were the same on this front.


AFAIK the Ryzen/Threadripper CPUs all support ECC perfectly fine. Where things get shady is with motherboards that advertise support but don't quite do it right. Make sure you get a reputable motherboard and you should be fine.


Not so much a reputable motherboard, but one that has been validated by a reputable manufacturer to support ECC. Asus (literally the first I thought of, no preference implied) is a reputable manufacturer, but they may happily sell a low-end motherboard without ECC support and a high-end motherboard with ECC support. Both motherboards are equally reputable.


Asus has an X570 board with supported ECC; I don't remember the exact model name, but it had WS in it.

I do wish memtest86 had a rowhammer style test, as that should tell if ECC is working
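
Short of a rowhammer-style test, one sanity check on Linux is whether the kernel's EDAC driver registered a memory controller at all, and then watching the corrected/uncorrected error counters. This is only a sketch and doesn't prove correction works end to end, but "no controller registered" is a strong hint the board isn't really wiring ECC up:

    import glob, os

    mcs = sorted(glob.glob("/sys/devices/system/edac/mc/mc*"))
    if not mcs:
        raise SystemExit("no EDAC memory controller registered; ECC likely not active")
    for mc in mcs:
        ce = open(os.path.join(mc, "ce_count")).read().strip()
        ue = open(os.path.join(mc, "ue_count")).read().strip()
        print(f"{os.path.basename(mc)}: corrected={ce} uncorrected={ue}")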


> I do wish memtest86 had a rowhammer style test

Send a PR...


memtest86 (the non-open-source one) has a rowhammer test. It passes on my non-ECC machine though.

https://www.memtest86.com/tech_individual-test-descr.html


Overclock the RAM for the test?


I’ve been looking into building a pc this last week after almost 2 decades on Mac laptops. My motivation is that I’m playing around with Blender and my old MacBook is not having a fun time with it.

It looked to me like a modern AMD combo is the way to go - is that the general thinking these days? Are there issues you need to watch out for if you’re going to use them with Linux?


I built my first gaming computer two years ago and have been ravenously following the scene ever since. In my opinion, AMD is the best value for money in both single- and multithreaded workloads, and is only slightly outclassed in single-threaded workloads at the extreme high end, i.e. 9900K+ models. AMD seems to be particularly good in high-end multithreaded workloads like graphics processing.

It’s important to note that most CPU benchmark sites are utter trash and many are on Intel's payroll, including cpubenchmark.net (thanks, Google, for granting them incredible SEO).

My current build is still an 8700k but my next will most likely be AMD for a mostly gaming workload.


That’s excellent info thanks. Would you mind sharing ballpark figures on how much it cost? How did you go about choosing motherboard etc? I haven’t built a pc from scratch since the late 90s, so I’m a bit lost with it these days! Back then it was a matter of what compatible components you could even source...


One thing to checkout is YouTube build videos, you can often find ~2 minute shorts of a professional installing your exact part into your exact motherboard. This really removed a lot of the anxiety and trepidation I had while installing parts (it was my first time ever). Funny enough I plugged my monitor into the motherboard directly (when you’re supposed to plug it into the GPU) and had a black screen panic until I figured it out.

As far as figuring out what parts I would recommend three strategies:

1) start at reddit.com/r/buildapc and reddit.com/r/buildapcforme and checkout some of the threads at price points you are interested in. These builds will include pcpartpicker lists which are a tool that checks compatibility and general prices for you through a web ui. These builds are not perfect but will give you a general idea for how people are commonly building for the given ~6 month cycle. They all have their own biases, some like to save costs by getting a slightly weaker CPU, some spend more on the motherboard for future proofing, some need more storage so they throw in a giant SSD etc.

2) once you get a feel for the common components you’ll want to pick out your main two items: the cpu and gpu. You’ll already know your options from step one, so this step will be the icing on the cake. As long as you get a powerful cpu and gpu you will most likely be thrilled with your build (also an SSD and decent monitor for completeness). The cpu choice will also help you select a motherboard since that is really the biggest compatibility check throughout the whole process. Watch a video or two if you need help making a selection and to understand how to get your best value. I strongly recommend Paul’s hardware videos or Linus tech tips for a little more entertainment value.

3) now you’re ready to go on pcpartpicker and pick your full parts list with all the secondary pieces that complete your build, first the motherboard, your ram, SSD, psu, and case. From 1 you already have an idea of what you should go for, but now is your chance to tweak things around. If you don’t understand the difference between an nvme SSD or a sata SSD look it up, just like with software development, and you’ll find it easy to make an informed decision. Remember your cpu and gpu were the biggest difference makers so all of these decisions are relatively minor as long as you stick with reputable brands.

Iterate with 3 as much as you’d like. Post the build and see what people say about it. When you feel satisfied buy it and build it with the short YouTube videos demonstrating the exact parts you selected.


The low-effort way is to choose from a tier on https://www.logicalincrements.com/. Their component recommendations are pretty good, and they succinctly explain the different performance you can expect from different components.


I have the 1600. When comparing it to an i7 from a few years ago, although PassMark says the 1600 is faster, the i7 with integrated graphics on a laptop feels much snappier than my desktop with an NVIDIA GPU and twice as much RAM running Manjaro XFCE.

Just anecdotal but I'm still on the fence between Ryzen vs I series.


I built 5 workstations with these CPUs and RTX 2060 Super GPUs last Friday. Love the price/performance and power usage. Would have gone with the GTX 1650 Super but wanted the ray tracing and tensor cores (we use both).


Very unfortunate that no ML/math acceleration works with these chipsets.

The standard on the software side is CUDA..which is nvidia only.


That's completely independent of the processor. Nothing blocks you from running a Nvidia gpu with an AMD processor.
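
As a quick illustration (assuming a CUDA build of PyTorch is installed), the check below comes back the same whether the host CPU is AMD or Intel; CUDA only cares about the Nvidia GPU and its driver:

    import torch

    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))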


Ryzen comes with a bundled GPU. It's called Vega.

However you have to pay the Nvidia tax to do anything.

So a nice AMD ultrabook (which reigns on current price/performance charts) has the firepower, but no access to ML libraries.


The 3600 does not have a GPU. Their APUs are suffixed with G on the desktop, and U on the laptop.


Exactly. And besides, as nice as the integrated Vega graphics are, they are not the kind of GPUs you'd use for ML. Way too slow for that and missing the fast VRAM.


You can very easily pair an AMD CPU and an Nvidia GPU.


>The standard on the software side is CUDA..which is nvidia only.

I was under the impression that AMD gpus had achieved some modest level of compatibility with the major frameworks?


Unfortunately not on the newest consumer cards yet. Their CUDA competitor is ROCm, which isn't available on the Navi 10 cards, i.e. the Radeon 5600-5700 series. I've been watching the space closely and it seems like support might land relatively soon though.

(Performance as compared to CUDA on cards that are supported is not great however)
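
Worth noting that on the cards that are supported, the ROCm builds of PyTorch reuse the torch.cuda namespace (HIP underneath), so existing code mostly runs unchanged. A rough way to check which backend you actually got, assuming either a ROCm or CUDA build of PyTorch is installed:

    import torch

    print("HIP version:", getattr(torch.version, "hip", None))  # non-None only on ROCm builds
    print("Accelerator available:", torch.cuda.is_available())  # also True on ROCm, if the card is supported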


The AMD GPU acceleration layer is called ROCm. Zero support - not even with the brand new Google libraries like JAX.

https://github.com/RadeonOpenCompute


Could you expand on this?

I'm looking to gift a laptop that's good for ML stuff. What features should I be looking at?

Any specific recommendations?


You'd be looking for something with a discrete Nvidia graphics card with as many CUDA cores as possible.

Recommendations...depends on $$$...high end laptops are expensive as hell. Maybe something like a 1660 Ti?

Broadly I'd avoid ML work on a laptop though due to thermals


Unless you're trying to do ML with real-time video or audio feeds, I'd just get a random cheapo laptop and do all your stuff remotely with something like Jupyter notebooks (e.g. Google Colab).

Having a remote GPU which is 10x more powerful than you can afford to buy yourself, and which you can scale up to 50 GPUs at the click of a button, is a far better workflow than spinning fans on your laptop overnight, only to wake up in the morning and realise a code tweak is needed and then you have to wait another 12 hours...
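
As a sketch of that workflow: in a hosted notebook (e.g. a Colab GPU runtime) you can check which accelerator you were actually assigned before kicking off a long run; this assumes nvidia-smi is on the remote machine's PATH, which it is on Nvidia-backed runtimes:

    import subprocess

    print(subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True).stdout.strip())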


Get anything with an Nvidia GPU. Without an Nvidia GPU, it's pretty much dead.


I wouldn’t go for a laptop for ML and I don’t care if CPU is Intel or AMD... parent commenter is mixing up CPU and GPU


And despite all these amazing reviews, AMD still isn't selling many CPUs or APUs at all. I am wondering why. On the Steam Survey [1], AMD grew from 17.5% in November 2018 to 21.9% in April 2020. That is less than a 5-point increase in total. (I was actually surprised it had ~18% in 2018.) And despite the EPYC offering [2], it barely makes 5% of the server market, even assuming AMD had zero revenue from consoles.

Why is that? Have enthusiasts been so excited that they ignored the actual reality? Or are other factors at play? Inertia?

[1] https://store.steampowered.com/hwsurvey/Steam-Hardware-Softw...

[2] https://www.anandtech.com/show/15754/amd-reports-q1-2020-ear...


Most people don't buy new processors every year, and pre-built machines (especially laptops) are slower to switch from Intel. Slow adoption is partly product lifecycle (laptop lead time is on the order of a year) and partly customer demand: non-enthusiast consumers still want their i7, and businesses often have requirements for Intel specifically and would have to work to change that, if anyone internal even cares to.


This. Some of my friends are still running Intel's 4690K, which was released all the way back in 2014, as it suffices for most of their basic tasks and for games at low-mid settings, since a lot of gaming is GPU-bound.

With the typical 5-year upgrade cycle on CPUs for most non-hardware-enthusiast gamers and general-purpose users, I would expect the AMD share in Steam stats to drastically increase by about 2022.

Also, mind you, Steam numbers are not really representative of the overall market. Most online hardware retailers reported 50%+ of their high-end CPU sales to be AMD in Q1 2020.[0]

0. https://www.pcgamer.com/amd-market-share-gain-q1-2020/


Because the DIY market is relatively small. Intel still has huge sway with OEMs (and mindshare with consumers). It's difficult for AMD to overcome that even with a better product.


Inertia, or lack of a real reason to upgrade. Personally I'm still on a CPU from 2013 (Intel Core i5-4570S). Haven't really felt the need to upgrade. And I'm pretty happy that I went with 16GB of RAM back then, since it's still enough for me today (browsing/light gaming/coding). I've probably spent 10 times what that CPU cost on SSDs since then, so it's not a cost issue for me. And more recently a GeForce 1070 Ti. I've been thinking about upgrading, but I really need to see a significant IPC lift to bother. So maybe Zen 3 or Intel's next gen sometime in 2021, if they manage to raise IPC.



