Or actually, there are two bugs: some random freezes, and segfaults under heavy multithreading.
The bug was first reported in April, yet to date there has been no narrowing down from either AMD or the community.
This is the best hint we have so far: http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/b48d...
Personally I would like to build a Ryzen (possibly ThreadRipper, depending on pricing) computer this year, but that is definitely on hold until this issue is fixed.
The report seems to show the Intel 4C/8T doing better than the AMD 8C/16T, despite the latter's much bigger L2/L3 cache config.
Is 7700K really that good? Can anyone from AMD explain this?
Intel (Kaby Lake) Core i7 7700K (91W, $339):
  4C/8T, 4.2 GHz, 1MB L2, 8MB L3, score 17.81
AMD (Zen) Ryzen 7 1800X (95W, $499):
  8C/16T, 3.6 GHz, 4MB L2, 16MB L3, score 16.32
I have a sad HomeServer (J1900 BayTrail).
I am not sure, but I think it may have been the F00F bug.
The one I'm talking about is comment #338 in the AMD community thread:
That in turn links to this LKML message:
which says: "this problem happens with ECC memory and memtest86 clean memory".
Create a Docker image which reliably crashes all the time on a Ryzen system but not on others.
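Something like this sketch could be the payload of such an image. Everything here is my own assumption, not from the thread: the known community reproducers loop full GCC builds, while this just shows the shape by hammering all cores with short parallel compiles and flagging any job that dies (e.g. with SIGSEGV).

```shell
#!/bin/sh
# Hypothetical reproducer sketch (assumed details, not the actual
# community test case): keep every core busy with parallel compiles
# and report any compiler process that exits abnormally.
set -u
JOBS=${JOBS:-$(nproc)}   # parallel compile jobs per iteration
ITERS=${ITERS:-5}        # how many rounds to run

# A tiny translation unit so each compiler invocation is short but frequent.
SRC=$(mktemp)
cat > "$SRC" <<'EOF'
#include <stdio.h>
int main(void) { puts("ok"); return 0; }
EOF

fail=0
i=1
while [ "$i" -le "$ITERS" ]; do
    pids=""
    j=1
    while [ "$j" -le "$JOBS" ]; do
        cc -O2 -x c -c -o /dev/null "$SRC" &   # discard output, we only care if it crashes
        pids="$pids $!"
        j=$((j + 1))
    done
    for p in $pids; do
        # A segfaulting compiler shows up here as a non-zero wait status.
        wait "$p" || { echo "compile job died on iteration $i" >&2; fail=1; }
    done
    i=$((i + 1))
done
rm -f "$SRC"
[ "$fail" -eq 0 ] && echo "no crashes after $ITERS iterations"
```

On a healthy machine every iteration passes; on the affected systems, the reports suggest jobs start dying under this kind of sustained parallel compile load.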
As an old Barton user, it's really exciting to see AMD climb its way out of nearly a decade of darkness. Well-deserved kudos to Lisa and the team.
When we talk about female CEOs she is rarely mentioned, but here she is, in probably one of the most technology-intensive industries, with a company that was clearly floundering, and she has led AMD confidently out of the woods into a position of strength. What a performance.
This. It seems amazing to have a technical CEO (she has a PhD from MIT) who also has management and turnaround ability.
When you look at successful CEOs in general, they are mostly promoted from within, after working their way up the ranks. That she has only been at AMD for a couple years total and seems to be doing well is more surprising than that she has a technical degree: technical degrees imply enough intelligence to figure out how to do management tasks if you want to take that route.
Only when it comes to CPUs are two brands a "staggering" amount of choice. Something is wrong with this business, and I think it's the ISA patents. If Oracle can't patent their software API, then Intel/AMD shouldn't be able to patent their ISA either. An ISA is just a hardware API.
If a competitor to AMD or Intel came out with a chip that requires recompiles, but is significantly ahead of them in a key metric, it would sell in the server market.
But competing with Intel and AMD takes billions in research, and the design of modern CPUs is surely a patent minefield too.
Just look at what's (not) happening with ARM servers.
And in non-gaming performance looks even better.
Edit: Or maybe not. In http://www.gamersnexus.net/hwreviews/3001-amd-r3-1300x-revie... there is still a comfortable lead of the i5s over the overclocked 1300X, in that different game selection. I will have to wait to see what my meta-benchmark says.
Anyway, Raven Ridge will be released in laptops first; desktop parts won't arrive until 2018.
In general Ryzen's power consumption is not bad.
TDP doesn't tell you how much power the CPU will draw under actual loads; it's the thermal budget the OEM should provide to satisfy the CPU's performance target.
A simple example of this: the 51W TDP Intel chips consistently used more power (and thus produced more heat) than the 65W TDP Intel chip. The reason is that Intel is OK with the 51W TDP chips thermal-throttling more than the 65W TDP part, because that's the performance they are selling.
Under full load the Ryzen 3 1200 was power-competitive with the Intel offerings, whereas the Ryzen 3 1300X and Ryzen 5 1500X used 20W more power.
20W is not going to be that noticeable, though.
I'm building a small cluster of i3 NUCs, but I like AMD so this is intriguing. Power efficiency is important to me, and 20W is less negligible when it's multiplied.
Looks like they do pretty well vis-a-vis Intel.
I am still eagerly waiting for Zen + Vega APU.
Other stuff that's just using MSVC/gcc/clang/llvm code targeted at Skylake/Haswell etc. will run just fine, as Zen's microarchitecture is similar enough.
AMD's own compiler gives us about a 3% performance difference compared to regular clang. Basically no big influence.
AMD announced a refresh of their APUs, but it's probably just a half-hearted stop-gap aimed at those who want current-gen Vega-based graphics but are too impatient to wait for Zen-based compute. The holy grail of Zen compute/Vega graphics systems is coming down the pipe, but probably just in time for the 2017 back-to-school season, if not Black Friday/holiday season.
I build stuff with just the "-j" option (unlimited jobs) to Make all the time; I had a 75 load average the other day. There are also Rust Servo compiles, which aren't GCC but certainly load down the machine.
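For context (the throwaway Makefile below is mine, not the commenter's build): `make -j` with no number spawns as many jobs as there are ready targets, which is how load averages like 75 happen, while `-j N` caps concurrency and `-l N` backs off when the load average gets too high.

```shell
# Illustrative-only Makefile with eight independent targets.
# (.RECIPEPREFIX avoids the literal-tab requirement; needs GNU make 3.82+.)
cat > /tmp/demo.mk <<'EOF'
.RECIPEPREFIX := >
targets := t1 t2 t3 t4 t5 t6 t7 t8
all: $(targets)
$(targets):
>@echo building $@
EOF

out_unlimited=$(make -f /tmp/demo.mk -j)                      # unlimited jobs: all 8 at once
out_capped=$(make -f /tmp/demo.mk -j"$(nproc)" -l"$(nproc)")  # capped, plus load-average back-off
echo "$out_capped"
```

With real compile jobs instead of echoes, the capped variant keeps the machine responsive while the unlimited one happily oversubscribes it.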
Processor errata, chipset errata, bugs in the Linux kernel, bugs in the compilers, and application issues. There is so much opportunity for strange behavior. It takes a long time to work it all out.
Intel has had people contributing patches for a very long time. AMD had better get a team on it, if they don't already.
Well, I would say that ASRock is not the best motherboard brand around. Try Asus or MSI next time.
Having said that, they're one of the smaller manufacturers, and it wouldn't surprise me if they're still in the process of getting on top of some of the issues that inevitably come up with a new platform.
It also wouldn't surprise me if GP got a lemon. It can happen with any manufacturer.
As it is, though, I'm a very happy customer of theirs.
Desktop computing is actually pretty good at utilizing many cores these days: the boot process, starting non-trivial applications, using full-disk encryption, running multiple tabs in a browser (or just having a browser open with a few tabs plus something else). In all of these situations a 4C/4T CPU provides a tangible, perceivable difference. 2C/4T CPUs are obsolete, and I wouldn't recommend one even for an entry-level computer.
Can we take a moment...stop...and all re-read this again, and weep for where we've landed with all of this technology?
I have four Newegg tabs, one Amazon tab, one Verge tab, and one Engadget tab open in Firefox. CPU utilization is 3% on this ancient i3 laptop.
Maybe your experience isn't typical. That's all I'm suggesting.
Just unsuited for you.
Unlike Intel, where you have to pay more $$$ for a K-series CPU and a top-chipset (Z170/Z270) board.