AMD Ryzen 3 1300X and Ryzen 3 1200 CPU Review (anandtech.com)
154 points by mcone on July 27, 2017 | 88 comments



I'm still waiting for this nasty hardware bug to be addressed: https://community.amd.com/message/2796982

Or actually, there are two bugs: random freezes, and segfaults under heavy multithreading.


It would be nice if someone actually managed to pin this down. It appears to primarily affect people who compile with Haswell optimizations.

The bug was first reported in April, yet to date there has been no narrowing down from either AMD or the community.

This is the best hint we have so far: http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/b48d...


It happens to me very consistently when I compile with -march=znver1.


This should get more recognition from the media. Especially as AMD are launching server and high-end CPUs with high core-counts (where I assume a large part of the market will be programmers), getting 120fps instead of 140fps in ${SOME_GAME} is irrelevant compared to unpredictable crashes during `make -j 16`.

Personally I would like to build a Ryzen (possibly ThreadRipper, depending on pricing) computer this year, but that is definitely on hold until this issue is fixed.


Are there any performance benchmarks associated with building software? Similar to your point, gaming benchmarks are totally meaningless to me, but I would love to learn about the difference in time it takes to build some large software (e.g. Chrome) on various cpus.


AnandTech runs a Chrome Compile benchmark in every review under the office benchmarks section.

http://www.anandtech.com/show/11550/the-intel-skylakex-revie...

http://www.anandtech.com/bench/CPU/1857


Chrome compilation should be heavily parallelized. (It is my core use case; I mostly care about big-project compilation speed.)

The report seems to show Intel's 4C/8T doing better than AMD's 8C/16T, despite the latter's much bigger L2/L3 cache config.

Is 7700K really that good? Can anyone from AMD explain this?

Intel (Kaby Lake) Core i7-7700K (91W, $339), 4C/8T, 4.2 GHz, 1MB L2, 8MB L3: 17.81

AMD (Zen) Ryzen 7 1800X (95W, $499), 8C/16T, 3.6 GHz, 4MB L2, 16MB L3: 16.32


Wow, that was fast and exactly what I meant. Thanks again.


Phoronix included Linux compilation benchmarks when it reviewed the Ryzen 1800x: http://www.phoronix.com/scan.php?page=article&item=ryzen-180...


More recognition, just like the Intel bugs on their Bay Trail and apparently also the new Apollo Lake (same CPU family). https://bugzilla.kernel.org/show_bug.cgi?id=109051 - check how old it is...

I have a sad HomeServer (J1900 BayTrail).


Reminds me of when I first fiddled with Linux, and while checking out dmesg noticed a line up top about having detected a bug in the CPU and deploying a workaround.

I am not sure, but I think it may have been the F00F bug.


It would be super awesome if any of those bug reports included whether or not they are using ECC RAM. Occasional segfaults from compilers (which touch lots of RAM) could be explained by bit errors occurring in memory.


That thread is long, and there are a lot of users reporting the issue, but at least one of them claims that it still happens even with ECC memory.

The one I'm talking about is comment #338 in the AMD community thread: https://community.amd.com/message/2813391#thread-message-281...

That in turn links to this LKML message: https://lkml.org/lkml/2017/7/25/1295

which says: "this problem happens with ECC memory and memtest86 clean memory".


Awesome! Thanks for finding that; I gave up a couple of hundred comments in...


It seems unlikely that issues caused by bit errors in memory would go away when the uop cache is disabled, like so many people are reporting.



Has there been any confirmation that it's actually a hardware bug and not just a codegen or other issue in GCC?


If it were in GCC, it would be possible to get a setup that makes it consistently reproducible, not probabilistic as it is now.


It doesn't happen just with gcc. But AMD has so far not commented on the issue, except to say that they are looking into it.


Looks like a good use-case for something like Docker.

Create a Docker image which reliably crashes all the time on a Ryzen system but not on others.


Why docker over anything else that uses the CPU?


My guess is state preservation, for the purpose of making the bug happen more reliably.


A static binary serves the same purpose. Docker is actually worse because you can't be sure if the bug is caused by your particular combination of Docker version, kernel version, networking setup and moon phase.


Just from a year ago, the CPU market has changed completely. The sheer amount of choice at all levels is staggering. For the mid-level user the 1600 especially is a formidable offering, and the 1700 with 8 cores just ups the ante.

As an old Barton user, it's really exciting to see AMD climb its way out of nearly a decade of darkness. Well-deserved kudos to Lisa and the team.

When we talk about female CEOs she is rarely mentioned but here she is in probably one of the most technology intensive industries with a company that was clearly floundering and she has led AMD confidently out of the woods into a position of strength. What a performance.


> When we talk about female CEOs she is rarely mentioned but here she is in probably one of the most technology intensive industries with a company that was clearly floundering and she has led AMD confidently out of the woods into a position of strength. What a performance.

This. It seems amazing to have a technical CEO (she has a PhD from MIT) who also has management and turnaround ability.


She might have a technical degree, but they didn't take her from laying out a circuit board (or whatever) and make her CEO. She has been working her way up the management chain for at least 15 years. She has been successful in all that effort, and so she was rewarded with more responsibility, a task she has risen to.

When you look at successful CEOs in general, they are mostly promoted from within, after working their way up the ranks. That she has only been at AMD for a couple years total and seems to be doing well is more surprising than that she has a technical degree: technical degrees imply enough intelligence to figure out how to do management tasks if you want to take that route.


AnandTech's interview with Lisa Su on Ryzen Launch:

http://www.anandtech.com/show/11177/making-amd-tick-a-very-z...


>The sheer amount of choice at all levels is staggering.

Only when it comes to CPUs are two brands a "staggering" amount of choice. Something is wrong with this business, and I think it's the ISA patents. If Oracle can't patent their software API, then Intel/AMD shouldn't be able to patent their ISA either. An ISA is just a hardware API.


It's not just ISA.

If a competitor to AMD or Intel came out with a chip that requires recompiles, but is significantly ahead of them in a key metric, it would sell in the server market.

But competing with Intel and AMD takes billions in research, and the design of modern CPUs is surely a patent minefield too.

Just look at what's (not) happening with ARM servers.


Well, some healthy competition in the budget-friendly CPU market, and a good alternative to Intel's i3 (slacking) lineup. Great news for customers!


I'd pit them against the i5. Look at the gaming benchmarks: the 1300X is better in some games and worse in others, but in general on par, while being cheaper and overclockable. I would also be surprised if one could not overclock the 1200 close to the level of the 1500X, and then that $100 CPU will beat the 7400 and be on par with the 7500.

And non-gaming performance looks even better.

Edit: Or maybe not. In http://www.gamersnexus.net/hwreviews/3001-amd-r3-1300x-revie... there is still a comfortable lead of the i5s over the overclocked 1300X, in that different game selection. I will have to wait to see what my meta-benchmark says.


Well, IIRC Ryzen 5 gaming performance got better some time after release, both due to developer optimization and AMD's BIOS updates. i5s should be (and probably will be) ahead of the R3, but that's more the R5's ballpark.


That's sadly unlikely to happen again for the R3, as it is the same architecture as the R5 and will already profit from the prior optimizations. Maybe some gains where hyperthreading was assumed. I've checked some more benchmarks by now, and it really seems like your last sentence is right: the i5 is in the R5's ballpark, and the R3 can't reach it.


They are missing an iGPU, especially in that price segment.


Bristol Ridge APUs were launched today. AM4 socket, DDR4 capable, but it's an updated version of the older architecture, not Zen.

http://www.anandtech.com/show/11669/amd-releases-bristol-rid...


We're already a month into the promised release period of 2H17 for Raven Ridge. Let's hope it comes out closer to the beginning of 2H17 than the end.

http://wccftech.com/amd-raven-ridge-ryzen-apu-vega-gpu-leak/


We will likely see Zen-based laptops before we see Zen-based standalone APUs, with laptops coming for the holiday season (ie: Oct/Nov) and APUs late 1H18.


I wouldn't mind seeing a good Zen based laptop... Something around an R7 1700 would be nice, maybe 10-20% lower clock for thermals. Something capable of 64gb ram and user serviceable nvme and/or ssd would be great.


There's a decent amount of small form factor Intel stuff on laptop chips (Chromebox, Asus Vivo Mini, etc), if socketed apus aren't forthcoming, that might be something that happens with AMD chips too.


The fact that they announced it as 'second half' as opposed to 'third quarter' should tell you enough.

Anyway, Raven Ridge will be released in laptops first. Desktop Raven Ridge will not be released in desktop formats until 2018.


While I absolutely agree and want to see much more competition, I believe we /need/ to see ECC across the board, and of course the lack of an integrated GPU renders the most common deployments of this range of CPU less attractive.


Are the TDP figures to be trusted? How does the heat produced compare with Intel offerings? The last time I bought an AMD my PC felt like a hair dryer, will I regret going AMD again?


That's frankly a valid question, since the prior generation of AMD chips was really bad in this area. But the article has a whole page about power consumption. AMD's TDP rating normally can't be trusted, because they compute it in very strange ways, but in this case the full-load power usage of the 1500X is only 2W above 65W, so it checks out. The R3 1300X always stays below that, and the slower-clocked 1200 uses a lot less.

In general Ryzen's power consumption is not bad.


Just look at the power consumption metrics in the review. If it used less power, it produced less heat. Nearly all the power that's used is directly converted to heat.

TDP doesn't mean anything about what the CPU will produce under actual loads, it's the budget that the OEM should provide to satisfy the performance target of the CPU.

A simple example of this: the 51W TDP Intel chips consistently used more power (and thus produced more heat) than the 65W TDP Intel chip. The reason is that Intel is OK with the 51W TDP chips thermal-throttling more than the 65W TDP part, because that's the performance they are selling.

Under full load the Ryzen 3 1200 was power-competitive with the Intel offerings, whereas the Ryzen 3 1300X and Ryzen 5 1500X used 20w more power.

20W is not going to be that noticeable, though.


It depends where that 20W is going... on the floor or into your lap and hands?

I'm building a small cluster of i3 NUCs, but I like AMD so this is intriguing. Power efficiency is important to me, and 20W is less negligible when it's multiplied.
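The multiplication is easy to do back-of-envelope. A sketch with illustrative numbers (the node count and 24/7 duty cycle are my assumptions, not from the review):

```shell
# Extra energy (and, per the comment above, heat) from a per-node power delta,
# assuming the machines run 24/7. Arguments: watts per node, node count, hours.
extra_kwh() {
    awk -v w="$1" -v n="$2" -v h="$3" 'BEGIN { printf "%.1f\n", w * n * h / 1000 }'
}

extra_kwh 20 1 8760   # one node, one year: prints 175.2 (kWh)
extra_kwh 20 8 8760   # eight-node cluster, one year: prints 1401.6 (kWh)
```

So a 20W delta is a rounding error for one desktop, but over a small always-on cluster it becomes real money and real heat to dissipate.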


I don't see why you wouldn't (particularly) trust the measured power draw (as opposed to everything else in the review), which is what really matters anyway:

http://www.anandtech.com/show/11658/the-amd-ryzen-3-1300x-ry...

Looks like they do pretty well vis-a-vis Intel.


Is AMD planning to release something with integrated graphics? It's nice that they're making parts available without it, but it seems like a lot of people buying cheap CPUs would rather not buy a separate GPU.


In the beginning of the article, they state "Zen paired with graphics, coming in Q3/Q4"


Ah, I missed it. Thanks.


Raven Ridge.


While the benchmarks show AMD is doing well, I wonder how much of a disadvantage AMD's Zen is at with current compilers and software optimized for Intel.

I am still eagerly waiting for Zen + Vega APU.


Stuff using Intel's compiler or maths libraries is going to suck, somewhat.

Other stuff, that's just using MSVC/gcc/clang/llvm code targeted at Skylake/Haswell etc., will run just fine, as Zen's microarch is similar enough.

AMD's own compiler is like a 3% performance difference for us, compared to regular clang. No big influence, basically.


Zen+Vega APUs will probably come early next year and I've seen anecdotal reports suggesting to hold off for Zen+Navi in late 2018/early 2019.


What about Zen powered laptops? What is holding them back?


Zen CPU sales are expected to be brisk for some time; supply yields are likely an issue, as well as laptop OEMs probably feeling skittish about pricing.

AMD announced a refresh of their APUs, but it's probably just a half-hearted stop-gap aimed at those wanting current-gen Vega-based graphics but too impatient to wait for Zen-based compute. The holy grail of Zen compute/Vega graphics systems is coming down the pipe, but probably just in time for the 2017 back-to-school season, if not Black Friday/holiday season.


I also would like to know. I'm about to upgrade my laptop and would like to see some viable amd options.


I bought a Ryzen desktop with an ASRock motherboard a few weeks ago. It was very unstable. I installed Ubuntu 17.04 with an upgraded kernel. Most crashes/lock-ups happened when running VMware Workstation. I returned the desktop and now have a very stable Intel i7. It seems like the issue was a combination of the CPU and the motherboard. I assume Ryzen + ASRock would run fine on Windows 10.


My personal anecdote has been very positive. I've been running a stable 1800X on an X370-Pro overclocked to 4GHz as my desktop since the end of April. Running Fedora 25 with latest updates (currently on a 4.11.10-200.fc25.x86_64), although it's been stable from the beginning. I use Docker heavily for development as well as various retro game emulators (for n64, psx, dosbox) and VirtualBox. No stability issues.


That's weird. I've got a 500 GB postgres database running on a 1700X doing a couple hundred queries per second around the clock and it hasn't gone down on me yet.


I have been running a 1800X on Arch Linux which has been rock solid, using 32GB of ECC memory and very compile heavy workloads. Might be a bad board?


I run a 1700X with 32 GB ECC using Fedora 26 and it has been very stable. ASUS X370-PRO board.

I build stuff with just the "-j" option (unlimited jobs) to make all the time. Had a 75 load average the other day. Also Rust Servo compiles, which aren't GCC but certainly load down the machine.
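For reference, this is the difference between bare `-j` and the usual bounded form; the throwaway Makefile here is just a stand-in so the command has something to chew on:

```shell
# Bare `make -j` (no number) lets GNU make spawn an unbounded number of
# parallel jobs, which is where 75 load averages come from. -jN caps it;
# nproc (GNU coreutils) reports the logical CPU count.
dir=$(mktemp -d)
printf 'all: a b\na:\n\t@true\nb:\n\t@true\n' > "$dir/Makefile"
make -C "$dir" -j"$(nproc)" all   # one job per logical CPU
```

Bare `-j` is fine if you have the RAM and want the machine saturated; `-j"$(nproc)"` keeps the load average near the core count instead.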


Does the kernel in 17.04 fully support everything? Sometimes it takes a while to get support for new hardware in the kernel.


It's probably not a question of support, but rather bugs.

Processor errata, chipset errata, bugs in the Linux kernel, bugs in the compilers, and application issues. There is so much opportunity for strange behavior. It takes a long time to work it all out.

Intel has had people contributing patches for a very long time. AMD had better get a team on it, if they don't already.


Did you upgrade the BIOS? Maybe it was the RAM? I'm running an ASRock AB350 Pro4 with a 1700 and Linux without problems.


Yes, but figuring that out took a couple of hours. I had to install an old (new to me) version to get the instant flash bios update feature. The newest bios versions can only be installed using windows or instant flash.


Can you please report your findings in this thread?

https://community.amd.com/thread/215773


You are not alone. The community and, apparently, AMD as well are working on it. tl;dr: there is a machine-hang problem and a spurious-segfault problem. Both have workarounds that are almost reliable and maybe acceptable. AMD accepts RMAs. For most(? - at least some) people, the replacement CPU is without the known problems. https://community.amd.com/thread/215773


Might just have been a faulty CPU or motherboard. Happens :(


>asrock

Well, I would say that ASRock is not the best motherboard brand around. Try Asus or MSI next time.


Can offer an opposing anecdote. I have an ASRock EP2C602-4L/D16, loaded with tons of RAM, hard drives and PCIe cards running a bunch of VMs (some with GPU passthrough) and it's been 100% stable. Plus their support has been awesome -- I had an issue with a bent pin and they did a complete motherboard check for me for free, then threw a bunch of SATA cables in the box when they sent it back.

Having said that, they're one of the smaller manufacturers, and it wouldn't surprise me if they're still in the process of getting on top of some of the issues that inevitably come up with a new platform.

It also wouldn't surprise me if GP got a lemon. It can happen with any manufacturer.

As it is, though, I'm a very happy customer of theirs.


True. Some statistics I once saw showed that Asus is the most reliable, but I got a lemon from them once. It's all just probabilities.


I've run into at least one annoying firmware bug on every desktop motherboard I've used in the past decade, including a flagship product from ASUS. Nobody's perfect or even close to it. The two ASRock boards I have in use at the moment don't seem to be any worse than the two ASUS boards I'm using.


They used to be the cheap option but that's the past. The ASRock Taichi is actually one of the best X370 boards.


I agree. My new board is an MSI. MSI's BIOS features and interface are impressive.


I know what your problem is: you are using an ASRock motherboard. The only time I bought something from them, it was a motherboard that ended up failing because it was BENT! You should try something better, like Asus or Gigabyte.


I have a 1700X with linux 24/7 running like this: https://i.imgur.com/KXARxUE.png, no issues so far.


IMHO, the single-thread performance/dollar graph at the end of the article says it all. At this price range, I've found that workloads are still mostly single-threaded. The Intel parts are still the king with their decent clock rates and their deep pipelines. The Ryzen 3 1200 is a total dog.


"Total dog" for being 7-13% slower per thread while offering twice as many cores? This is ridiculous. I am typing this on an i5 Mac with 2C/4T, and the difference vs a 4C/8T i7 in common office applications (browser, Excel, etc.) is _insane_. Just using newegg.com is borderline impossible with just a few tabs open unless you have 4 non-SMT threads.

Desktop computing is actually pretty good at utilizing many cores these days: the booting process, starting non-trivial applications, using full-disk encryption, running multiple tabs in a browser (or just having a browser open with a few tabs + something else), in all of these conditions 4c/4t CPU will provide tangible, perceivable difference. 2c/4t CPUs are obsolete and I wouldn't recommend one even for an entry-level computer.


Just using newegg.com is borderline impossible with just a few tabs open unless you have 4 non-SMT threads.

Can we take a moment...stop...and all re-read this again, and weep for where we've landed with all of this technology?


Just using newegg.com is borderline impossible with just a few tabs open unless you have 4 non-SMT threads.

I have four Newegg tabs, one Amazon tab, one Verge tab, and one Engadget tab open in Firefox. CPU utilization is 3% on this ancient i3 laptop.

Maybe your experience isn't typical. That's all I'm suggesting.


Does the iMac have an SSD? Because that makes a huge difference in UI responsiveness even in web browsing, much more than the CPU at that level anyway.


My experience has been the opposite. For running a browser, I'll take a faster dual core over a slower quad core. Simultaneous tabs are of little use to me if I can't scroll smoothly through Amazon.com.


It not being the ideal choice for your specific use-case does not in any way make it a "total dog".

Just unsuited for you.


I suggest you actually read the article and take a look at the ST performance/price graph that I was referencing. The 1200 is an outlier.


Unless you are using the chips for something like compiling source code or running VMs, in which case the workloads scale almost linearly and the Zen lineup ends up looking like great value for money.


I said "in this price range". Who is using the lowest-end chips? It's not coders or creators. It's not gamers. It's people browsing the web. The responsiveness is the biggest performance metric in that case, and is directly related to straight-line performance.


I grew up in a developing country; for a long time my computer purchase decisions were overwhelmingly driven by price. Cheaper SKUs see far more diverse use than you suspect.


All Ryzen CPUs are unlocked, so you can run the 1200 at ~3.9GHz, 4.0 if you're lucky. And the cheap B350 boards support overclocking.

Unlike Intel where you have to pay more $$$ for a K CPU and a top chipset (Z170/Z270) board.



