Now EVERY single Zen 2 chip is at least a smidge faster than my 6700K in single-threaded benchmarks AND close to 100% faster in multi-core benchmarks. So for $180 I can buy something that takes a huge, meaningful shit on my (fairly nice at the time) 6700K (~$350 when I purchased it).
That's just crazy to me, how much performance I can get for so little. I'm going to buy something beastly with at least 16 cores, but! I also plan on building a little cluster using mini-ITX B550 boards and the 3300X. In fact, I'll probably build the little cluster first, because each 3300X is still faster than my 6700K and the total cost per system will be like $450!!one! I haven't gotten such a meaningful upgrade for so little since the original Core 2 Duo days. Plus, when Zen 3 comes out it's a drop-in upgrade.
AMD is delivering insane value to their consumers, and I love it. I just wish I could buy a Zen 2 chip in a laptop that doesn't look like it was made for a fourteen-year-old (no offense to any fourteen-year-olds). I heard someone say the lack of 4K and more professional-style laptops could be Intel back-channel fuckery, but... there's also a chance no one expected AMD, in a single fucking generation of CPUs, to sweep every single market.
I just wish I could buy a decent motherboard that doesn't look like a 14-year-old's first attempt at drawing an F-35. My current rig does this weird glowing thing at night when it's supposed to be off. For my next machine I would honestly pay more to NOT have RGB support.
I did get a laugh out of the integrated graphics, probably the lowest spec on the market: "DDR4 16MB" - that's not a typo. I'm a little interested in how that is accomplished. Can one allocate a 16MB chunk of DDR4?
The integrated graphics is for the BMC (baseboard management controller), not for the console.
I had to use an ASUS motherboard and it took a good 10-15 minutes of poking through menus (with terrible keyboard navigation) to find the "magic rgb off" incantation.
If you happen to have an ASUS Prime Z370-A motherboard, here is what you have to do:
Boot and quickly press F2 or DEL to get into the BIOS (I had to do this more than once to poise my finger over the right key)
Press F7 to enter Advanced mode
Choose the Advanced menu (4th menu across the top)
-> Onboard Devices Configuration (8th sub-menu in the list)
-> RGB LED Lighting configuration (2/3 of the way down the page)
You have to disable both:
- when system is in working state
- when system is in sleep, hibernate or soft off states
I basically tried all the obvious places, then all the menus before this one, before I stumbled upon it. It was nuts.
Well, it hasn't improved tremendously, but it hasn't remained flat either. You can get around 20% more single-core performance, if that's so important to you.
1. Intel/Nvidia contracts have OEMs' hands tied (which explains the capped GPUs in most Zen 2 laptops).
2. Lack of widespread Thunderbolt 3 support on AMD.
While I'm fully bought in on reason 1, reason 2 is still hard to digest, as one can still ship a laptop with a USB-C port supporting PD and DisplayPort (with alternate mode) and call it a day, and most users won't mind. I hope Zen 3 causes a power shift in laptops.
The idea that in 2-3 years I could just buy a used 3970X or even a 3990X to more than double the core count is amazing to me, and knowing that keeps the insane value of the smaller Ryzens from fully blowing my mind.
Lenovo just released a couple of ThinkPads powered by AMD Ryzen 7 Pro CPUs: the T495, T495s, and X395. They don't have 4K screens though.
First of all, 20/20 vision is the average, not the best. Many people have substantially better than 20/20 vision. I remember laughing that I could read the super-fine-print copyright notice at the bottom of the eye test, which is about 1/3rd the size of the smallest font in the test itself.
Secondly, the eye is complex and has surprising capabilities that defy simple tests, which are designed as medical diagnostic tools, not as tests of ultimate resolving capability. For example, Vernier acuity (1) means that much higher-resolution printing (and screens) is required than one might think based on naive models.
For other uses, like fluid animations or games, you want as high as possible.
If we lived in an alternate universe where cinema was all 60fps and soap operas were 24fps, would we think that 24fps looked tacky instead?
On the other hand, I think there are definitely some objective effects in play here too - CGI is a lot easier to make convincing at a lower framerate with added motion blur.
If you're in the latter crowd, you can configure X or Wayland to render to a 4K screen buffer and then downscale to fit the actual screen. Yes, downscaling no longer means 1 pixel = 1 pixel, which introduces some blur, but unless you're a 20/20-vision kind of person, I doubt you'd be able to tell without your nose touching the screen...
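On X, the usual trick for this is xrandr's --scale option, which renders the output to a larger framebuffer and downsamples it to the panel. A minimal sketch, assuming a 1080p panel - the output name eDP-1 is an assumption, so check yours with plain xrandr first:

    # list outputs and find your panel's actual name
    xrandr

    # render a 4K-sized (2x2 of 1920x1080) framebuffer, downscaled to the panel
    xrandr --output eDP-1 --scale 2x2

    # back to native 1:1 rendering
    xrandr --output eDP-1 --scale 1x1

Wayland compositors expose the same idea through their own scaling settings, so the exact knob varies there.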
It offers exceptional value. Pair that with a GeForce GTX 1650 SUPER, which is at the top of the performance/$ chart for GPUs: https://www.videocardbenchmark.net/gpu_value.html
And you have a champion of a workstation right there.
This is such a welcome change from the crypto days, with GPUs in short supply and at very high prices, and stagnant Intel CPU performance!
None of them are first-time PC builders though, so it might have something to do with lockdown-induced boredom and my friends deciding to challenge themselves to build an SFF PC - which sounds like a major pain and a challenge compared to even a mid-tower (according to those of them who have already finished).
I've seen a number of people who have built tiny custom cases, and the challenge of both building something like that and getting everything to fit in it is super enticing. Definitely seems like the challenge and cool factor of fitting tons of power into something tiny is making a lot of people consider such builds over traditional mid or full tower machines.
New desktop Intel 10th Gen CPUs are actually coming out pretty soon. They've got a new socket (LGA 1200), so new motherboards are coming out for them as well.
They're only marginally faster in single-thread benchmarks than the Ryzen 3000 (i.e. Zen 2) chips, and they consume a lot of energy.
Also, Intel is still using their circa-2014 (read: ancient) 14 nm process for these desktop 10th Gen chips, which is quite disappointing. No Ice Lake 10 nm desktop chips yet.
Both of which are heavily impacted by single-thread performance. So getting a CPU with high single-thread perf would be a good thing to do.
If it's selling for $30, fine, but I've seen them going for a fair bit of the original price.
Hard disks and GPUs are the worst in my experience. They have the shortest lifespans and they're the most impactful when they die. Hard disks die suddenly, destroying all data - self-explanatory. GPUs die slowly, progressively producing rendering errors and then crashing the system erratically.
I've taken 3 GPUs to their graves between my personal computers and my family's.
If your GPU is dying, its VRAM or VRM is failing under high heat. Manufacturers cut fan speed to cut fan noise until warranty claims increase substantially, so they never provide sufficient cooling at stock settings.
Rackmount servers don’t sound like jet engines for no reason. Things fail if you don’t do that.
The best case scenario is "selling it because I'm buying a new one", the worst case scenario is "selling it because it's starting to flake out", and "selling it because the warranty ran out" is not much better.
Also motherboards whose capacitors dry up; connectors that develop faults.
This is of course leaving out devices whose firmware gets exploited, turning them into poison (routers, USB devices with malware).
The YMMV is always big with electronics. E.g. my second-to-last graphics card (a GTX 970) died within 2 years. But the aging effect from running 24/7 under high load is very real as well.
The high temperatures of permanently running at full power just wear it down.
Mining GPUs run undervolted without powercycling. They're fine.
I gave three of them to family members and sold the other 4. I was absolutely upfront in my eBay listings of their history.
Also the RTX 2070 Super might be the cheapest performance card at the moment.
Charts like these can save you a lot of money. But it will depend on the usage of course.
You have to be careful though. For example, the Radeon RX 570 is right next to the GeForce GTX 1650 SUPER at the top of the chart, but the GeForce costs almost 40% more. If the GPU isn't your bottleneck you may be better off with the less expensive one and save money to buy a faster CPU or more memory. Whereas if it is, you may be better off spending more for a faster GPU than either of those.
What these charts are great for is to look at the top fifteen or so as the set of candidates, which will all have good performance/$, and then if you prioritize better performance look at the fastest ones and if not then look at the least expensive ones.
But even then you have to be careful. For example, on the CPU chart the Core i3-9100F is one of the best values. The Ryzen 3 3200G is only slightly faster for $17 more (i.e. almost 25% more). But the 3200G has an iGPU, which is worth a lot more than $17 if it means you don't have to buy a discrete GPU.
For context, their page on the 1080 (https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+1...) gives it a score / $ ratio of 20, while the lowest item on that page has a score of 30.
You could argue they should have some minimum performance threshold before including cards on the chart - certainly no one should be considering an R7 260 or GTX 770 for a new system, however cheap they've gotten - but they don't.
Note: this is NOT for gaming. Just everyday workflow and dev environments at 4K, 60Hz.
You can probably run as many 4K monitors as the 1650 has video ports.
..which might easily land you in 2060/2070 territory.
Sorta here: https://www.cpubenchmark.net/power_performance.html
Unfortunately it's hard to tell which are embedded versus traditional sockets.
At the 3600 price point this is largely irrelevant though, since you will be GPU limited by way of budget anyway.
Games like SOTTR provide much better insight into the performance characteristics of modern engines than 2004 tech does, in both CPU- and GPU-limited scenarios (depending on settings).
> What it does mean is in CPU limited scenarios Ryzen CPUs currently cannot touch Intel's CPUs.
CSGO, regardless of how old the engine is, is very much a CPU limited title, and not only do Ryzen CPUs "touch" Intel in CSGO, they beat it.
In SOTTR the 3600 is about on par with a 9600k: https://www.gamersnexus.net/hwreviews/3489-amd-ryzen-5-3600-...
It also has some really fantastic thermals; it does not produce that much heat. I use it in a fully fanless / passive cooling setup and it works great.
My thermals were pretty great with the stock CPU cooler, but I ended up going with a Noctua NH-U14S, which might be a little overkill. When playing Half-Life: Alyx I stay between 45-60C, which seems pretty good to me. The game runs at a perfect framerate on High settings with the AMD 5700 XT and 32GB of DDR4 RAM, on the Vive Pro.
I may end up upgrading to a Ryzen 7 4XXX series when they come out, but for now, this CPU is performing great. Def best bang for your buck.
Yes. I have the same cooler* on a Threadripper 3970X with 280W TDP. The only thing that it can't handle is a sustained all-core load for >10-15 minutes. Even at all-core load it still turbos above base clock (but less than max turbo).
Technically a different SKU with a larger base plate, but otherwise the same design.
One thing I didn't realize at the time is that the 3000 series is "end of the line" for B450 motherboards, which should be a real consideration for anybody eyeing a new system with this CPU now. As this article suggests it leaves you in an awkward spot. I opted to upgrade there to a X570, but only barely; a lucky decision, as it turns out!
Granted, my case doesn't have the best airflow (NZXT H210i), but I was getting pretty high CPU temps with the stock cooler: averaging mid-to-upper 80s during gaming, high 50s idle.
I went ahead and ordered a Noctua cooler which should be coming today actually. Hoping it helps.
That said, a $30 aftermarket cooler will keep you at 50 degrees even during max load. Great chip for the money, and I'm super happy with it.
It took forever for these chips to get bundled, but you can now get Lenovo and I think HP prebuilt machines with them as well - for work I don't like hand-building a machine.
As for GPUs, I had the Gigabyte GTX 1650 Mini and the Gigabyte GTX 1660 Ti Mini. The 1650 is definitely easier to put in a passively cooled system, but the 1660 Ti is much more powerful.
The 3900x has been selling for $400 which is also amazing value.
Intel is behind in the CPU market after many years on top. Long live AMD design + Taiwanese manufacturing.
Except Intel still delivers better and more consistent gaming performance, largely thanks to a better overall architecture (L3, IMC, and what connects them).
Both Intel and AMD are billion-dollar publicly traded companies. I don't get why people would fanboy either of them. I don't want AMD to win, because then they are going to abuse their position, just like Intel did. Conversely, I don't want Intel to win. Two players in a market is already problematic; trying for one player is just suicidal.
Intel has 5x the market cap and 10x the revenue compared to AMD, and AMD is the underdog in its dGPU competition with Nvidia too! If you want innovation, you'd want to equalize R&D investment within both of these competing pairs, so it makes sense to be pro-AMD.
Even now, Intel's architecture is still superior in some areas, the biggest downside being that they are stuck on the outdated 14nm++++ process.
Once Intel's 7nm comes out there will be no reason to buy AMD, except for the price (I expect Intel's 7nm process to be roughly equivalent, or even superior, to TSMC's 5nm).
They have been repeating that "lucky coincidence" for the past 3 generations of Ryzen (of course you can nitpick that one of them wasn't a true generation but more of a gen 1.5, but whatever).
3 extremely successful generations in a row seems a bit more than just a "lucky coincidence" to me.
Perhaps their current situation isn't as much a "lucky coincidence" as the payoff for a very successful long-term strategy?
Tell that to Microsoft and Sony.
Like it or not, the Xbox and PlayStation show what the future will bring for games on PC, which is exactly the opposite of what Intel tries to sell.
I tried a light touch of overclocking and got absolutely nowhere. (Stock cooler)
Running that close to the edge is an achievement in itself though I guess
This always gets brought up as a use case, but how many people are building non-trivial codebases where compile times matter? If build times are under 5s, near-perfect parallelization only nets you a 1.25s gain between a 3700X and a 3600 (16 threads vs. 12: a fully parallel 5s build becomes 5 x 12/16 = 3.75s).
You want to see a preview as quickly as possible. Best if you can keep a live view on a second monitor.
The only scenario I can foresee where people wouldn't be doing parallel builds is if they are just working on a personal side project with no other people involved, where the need for parallel builds is non-existent either way. You don't want to have one giant monolith project for your whole complex web app; it becomes a nightmare very quickly.
I dread having to update the compiler or some core "used nearly everywhere" header file and wait another whole 28 hours...
Turns out it was the CPU and how badly AMD handles the C-states that save energy; I had to turn them completely off and that resolved the issue. (In AMD's defense, freezing the entire OS does save a lot of energy.)
This was exactly 2 years ago, and I had the issue for almost a year before figuring out the fix. I wonder if the new generation has this problem too?
From my experience, Intel is no better.
I have a 4 year old laptop with an Intel CPU that has the same problem, it was never fixed by Intel. [see EDIT]
For the first 3 years of use I just accepted the fact that my laptop would randomly freeze and I would have to reboot it.
One day, I got sick of it and started digging through forums until I found a way to avoid the specific C-state (by modifying kernel boot parameters) that caused the issue.
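I won't swear to the exact line after all this time, but it was this kind of change - a sketch for Debian/Ubuntu-style systems, with the parameter and value as assumptions to check against your own CPU and kernel docs:

    # /etc/default/grub - cap the deepest C-state the intel_idle driver may use
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_idle.max_cstate=1"

    # regenerate the grub config, then reboot
    sudo update-grub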
EDIT: Apparently, it's getting fixed now. Just 5 years late; the issue was reported in 2015: https://bugzilla.kernel.org/show_bug.cgi?id=109051
In the end I switched to a 3700X; so far the problem hasn’t reoccurred. No doubt it will now that I’ve posted about it (fool me twice, shame on me).
I constantly had problems when attaching/detaching an external monitor to my laptop; once in a while it wouldn't switch, or various parts of the desktop (like the task bar, etc.) would move around.
When they introduced PulseAudio, it wasn't ready at the time, so there were a lot of audio problems back then.
Overall the experience with Ubuntu was that every new release was taking some old bugs away and replacing them with new ones.
I hope things improved since then.
It turns out many PSUs don't handle low CPU power states well, and disabling C6 just prevents the CPU from being able to idle (and consequently, boost) properly. I've had perfect uptime with my 1600AF in a storage node since discovering this.
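If you want to A/B-test whether C6 is really implicated before blaming the PSU, there's a community script for toggling it from a running system - a sketch assuming the ZenStates-Linux tool (github.com/r4m0n/ZenStates-Linux); confirm the flags against its current README:

    # the script works via MSRs, so load the msr module and run as root
    sudo modprobe msr
    sudo ./zenstates.py --c6-disable   # test stability with C6 off
    sudo ./zenstates.py --c6-enable    # restore it once you've ruled it out

The longer-term fix that keeps C6 available is usually the "Power Supply Idle Control" BIOS option (set to "Typical Current Idle") found on many AM4 boards.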
I've never had a problem running Intel based systems with Linux. So, even though everybody's talking about how good AMD is right now, and why you should buy one for your next rig, and how good are AMD graphic cards with open drivers instead of the nVidia binary blobs... I sincerely don't see any reason to ditch what has been working OK and make the switch when preparing a new build. Should I?
It wasn't until Ryzen 2000 / 3000 in particular that board vendors took the AMD stack seriously again.
I remember at release, getting RAM working on Zen 1 was a nightmare. Nowadays you can run almost any stick at 3733MHz on Zen 2, no problem.
At home I've had a 3950x now for about six months and no such issues with the CPU at all. It just works and runs nicely. No freezes, no problems.
With that said, once the 3300X is out, I would likely go that route with a cheaper A320 motherboard. This would bring the cost of the CPU/motherboard to about $175, which is terrific value, and leaves you with more money for other components.
Motherboards don't need to have the xx. There is only B550. There is no other B5xx.
A320 supports Ryzen 1000-series, 2000-series, 2000-series APUs, and 3000-series APUs.
A320 does not support Ryzen 3000-series, and future Ryzen processors.
I can keep reading the linked chart for you, if you'd like.
There are several issues that come together to make it difficult to support all AM4 CPUs across all the AM4 motherboards. MSI seems to have committed to their AM4 motherboards supporting all AM4 CPUs, and they are having to perform some gyrations (and potentially some reverse-engineering of AMD-supplied binary blobs) to meet their claims.
 https://www.youtube.com/watch?v=T5X-8vZtml8&t=0s and https://www.youtube.com/watch?v=JluNkjdpxFo
So you should understand that an exchange like the following might be seen as someone ignoring facts brought to the table by a conversation partner:
Someone: incorrect statement (3000-series only on B550+)
Sometwo: correction, clarification, link to spec
Someone: supposition (I am almost sure)
Sometwo: precise specification to remove the need for supposition, link to spec again
Someone: criticism of sometwo for attitude regarding continued provision of vendor-supplied information in favor of someone's false statements and suppositions
If I'm reading this thread hoping for information about Ryzen compatibility, you've lied to me once and given me a guess. I've provided links to manufacturer guidance and ruffled someone's feathers. But facts are more useful.
Are you certain you don't mean "stops boosting" when you say that?
I'm running exactly the same: a 3900X with the default included cooler. Yeah, it hits 95C and at that point it's running at 3.8 GHz instead of 4.4 (I never see 4.6 unless I boot Linux in single user mode).
But 3.8 is the base clock speed, so I don't think "throttle" is the right word for "running at base clock."
Pretty sure I've seen 75+ on my 3700X.
The downside is that there is evidence that such high temperatures increase electromigration (or something similar) in the chips themselves, leading to decidedly finite lifetimes. I want this computer to last 10 years, so I bought an aftermarket CPU cooler for $30 to keep temps closer to 50 degrees.
Of course, if you do something like not attaching the heatsink at all, it's plausible that heat could spike fast enough to cook the chip before throttling or overtemp protection can kick in/shut the system off.
Here's the build: https://pcpartpicker.com/b/p9D2FT
I only wish the AMD desktop and workstation machines could support more RAM. I have 256GB on the Intel machine, whereas 128GB was the max for the 3900X. I think the Threadripper line only goes up to 256GB, which seems a little low for machines with such a large core count.
They make great stuff, but the value proposition disappears pretty quickly if you aren't building a gaming rig/workstation that already requires a dedicated GPU.
An 8-core Zen 2 APU is expected.
It's only Ryzen where the CPUs support ECC, but AMD doesn't require AM4 socket motherboards to implement it. (And you have to verify that a board advertising ECC actually implements ECC and doesn't just accept ECC modules.)
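Once a board is in hand, one way to sanity-check on Linux that ECC is actually live - a sketch using the standard dmidecode and kernel EDAC interfaces, nothing board-specific:

    # does the DMI table report an error correction type for the DIMMs?
    sudo dmidecode --type memory | grep -i 'error correction'

    # is the kernel's EDAC subsystem actually bound to a memory controller?
    ls /sys/devices/system/edac/mc/

    # exposed corrected-error counters are a good sign ECC is operational
    grep . /sys/devices/system/edac/mc/mc*/ce_count

DMI info can lie, so the EDAC driver binding to the memory controller is the stronger signal.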
I'm in the same situation, looking to get a new workstation and want ECC. Performance-wise, I would be good with a 3900x or 3950x. But PCIe lanes and good IOMMU separation are also requirements for me, so it looks like I have to go Threadripper. I'm disappointed that there's no 16 core Threadripper that has the other benefits of a HEDT platform for people who don't need zillions of cores. The current generation motherboards are also disappointing in that they max out at 4 PCIe slots. Apparently this is because PCIe 4.0 is still rather expensive at the moment.
I did not know this. Do you have a source for this that I can share with others for reference in the future? My understanding was that Ryzen and Threadripper were the same on this front.
I do wish memtest86 had a rowhammer-style test, as that should tell you whether ECC is working.
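The core of such a test is small enough to sketch: hammer two "aggressor" rows and watch a victim row - or, on an ECC system, the corrected-error counters - for bit flips. An illustrative C fragment only; a real test has to pick physically adjacent rows, which means knowing the DRAM address mapping:

    #include <emmintrin.h>  /* _mm_clflush */
    #include <stdint.h>

    /* Read two aggressor addresses, then flush them from cache so every
       iteration forces an actual DRAM row activation instead of a cache hit. */
    static void hammer(volatile uint8_t *a, volatile uint8_t *b, long iters) {
        for (long i = 0; i < iters; i++) {
            (void)*a;
            (void)*b;
            _mm_clflush((const void *)a);
            _mm_clflush((const void *)b);
        }
    }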
Send a PR...
It looked to me like a modern AMD combo is the way to go - is that the general thinking these days? Are there issues you need to watch out for if you’re going to use them with Linux?
It's important to note most CPU benchmarks are utter trash and many are on Intel's payroll, including cpubenchmark.net (thanks, Google, for granting them incredible SEO).
My current build is still an 8700K, but my next will most likely be AMD for a mostly-gaming workload.
As far as figuring out what parts to get, I'd recommend this three-step strategy:
1) Start at reddit.com/r/buildapc and reddit.com/r/buildapcforme and check out some of the threads at price points you are interested in. These builds will include PCPartPicker lists - a tool that checks compatibility and general prices for you through a web UI. The builds are not perfect, but they will give you a general idea of how people are commonly building in the given ~6-month cycle. They all have their own biases: some like to save costs by getting a slightly weaker CPU, some spend more on the motherboard for future-proofing, some need more storage so they throw in a giant SSD, etc.
2) Once you get a feel for the common components, you'll want to pick out your main two items: the CPU and GPU. You'll already know your options from step one, so this step will be the icing on the cake. As long as you get a powerful CPU and GPU, you will most likely be thrilled with your build (also an SSD and a decent monitor, for completeness). The CPU choice will also help you select a motherboard, since that is really the biggest compatibility check in the whole process. Watch a video or two if you need help making a selection and understanding how to get the best value - I strongly recommend Paul's Hardware videos, or Linus Tech Tips for a little more entertainment value.
3) Now you're ready to go on PCPartPicker and pick your full parts list with all the secondary pieces that complete your build: the motherboard, your RAM, SSD, PSU, and case. From step 1 you already have an idea of what to go for, but now is your chance to tweak things. If you don't understand the difference between an NVMe SSD and a SATA SSD, look it up - just like with software development - and you'll find it easy to make an informed decision. Remember, your CPU and GPU were the biggest difference-makers, so all of these decisions are relatively minor as long as you stick with reputable brands.
Iterate on step 3 as much as you'd like. Post the build and see what people say about it. When you feel satisfied, buy it and build it, following the short YouTube videos that demonstrate the exact parts you selected.
Just anecdotal, but I'm still on the fence between Ryzen and Intel's Core i series.
The standard on the software side is CUDA... which is Nvidia-only.
However you have to pay the Nvidia tax to do anything.
So a nice AMD ultrabook (which reigns on current price/performance charts) has the firepower, but no access to ML libraries.
I was under the impression that AMD gpus had achieved some modest level of compatibility with the major frameworks?
(Performance as compared to CUDA on cards that are supported is not great however)
I'm looking to gift a laptop that's good for ML stuff. What features should I be looking at?
Any specific recommendations?
Recommendations...depends on $$$...high end laptops are expensive as hell. Maybe something like a 1660 Ti?
Broadly I'd avoid ML work on a laptop though due to thermals
Having a remote GPU that's 10x more powerful than anything you can afford to buy yourself, and that you can scale up to 50 GPUs at the click of a button, is a far better workflow than spinning fans on your laptop overnight only to wake up in the morning, realise a code tweak is needed, and have to wait another 12 hours...
Why is that? Have enthusiasts been so overly excited that they ignored the actual reality? Or is some other factor at play? Inertia?
With the typical 5-year upgrade cycle on CPUs for most non-hardware-enthusiast gamers and general-purpose users, I would expect the AMD share in Steam stats to drastically increase by about 2022.
Also, mind you, Steam numbers are not really representative of the overall market. Most online hardware retailers reported 50%+ of their high-end CPU sales to be AMD in Q1 2020.