
AMD Zen and Ryzen 7 Review: A Deep Dive - jsheard
http://www.anandtech.com/show/11170/the-amd-zen-and-ryzen-7-review-a-deep-dive-on-1800x-1700x-and-1700
======
fcanesin
From the reddit AMA today with Lisa Su (CEO):

7th) Will all Zen products have all of the instruction sets and platform
extensions, or could lower end chips lose features like virtualization?

A7: In the consumer client space we have no plans to turn off virtualization
or features.

9th) Does AM4 / consumer ZEN support ECC memory?

A9: ECC is enabled on Ryzen and AM4.

[[https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea...](https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/)]

edit: added link

~~~
mee_too
ECC is a feel-good feature, but mostly useless. Memory is either corrupted
(which can be detected by running MemTest) or it works perfectly fine without
ECC. I have over 50 10+ year old Itanium servers with 192GB of ECC RAM each,
running 24/7, and there are only 2 ECC errors logged in total.

~~~
photon-torpedo
To word your statement differently: without ECC, your memory either works fine
or is corrupted. But you don't know which one it is, until you run memtest.
How often do you do that?

Personal anecdote: Upgraded RAM in my PC, ran memtest, everything fine. A year
later, I need to move and so I consolidate all my data on a big hard disk. As
the disk is new, I run some verification and notice some non-matching
checksums. Long story short, the new RAM had become defective, and a new
memtest showed a handful of errors after a few hours. Had I had ECC on that
machine, I probably would have seen notifications about memory errors even
before that day (plus the errors would likely have been corrected).

If Ryzen allows us to build a powerful PC with ECC for little more money than
a non-ECC one, I'll happily pay for this "feel good feature". The current ECC-
capable alternative from the Intel universe (Xeon CPU & motherboard) is simply
too expensive for most people.
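For what it's worth, the kind of after-the-fact verification in my anecdote
doesn't need special tooling; a rough sketch in Python (the file paths in the
test of it are placeholders):

```python
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(original, copy):
    """True if the copy matches the original bit-for-bit."""
    return sha256sum(original) == sha256sum(copy)
```

Of course this only catches corruption after the damage is done; ECC would
flag (and usually correct) the error as it happens.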

~~~
lewisl9029
> To word your statement differently: without ECC, your memory either works
> fine or is corrupted. But you don't know which one it is, until you run
> memtest. How often do you do that?

Definitely this. The extra peace of mind offered by ECC RAM is totally worth
the tiny price difference for me. The only thing that stopped me from actually
using it was the lack of (affordable) CPU support. Ryzen changes that.

On that note, I hope it's not long until we can easily buy ultraportable
laptops with ECC RAM without paying through the nose for Intel's ridiculous
prices for their mobile Xeon CPUs.

~~~
sundvor
I can't wait to see more than 4 cores in a laptop!

------
vxxzy
Wow. AMD has really pulled through on this one. They are really giving Intel a
run for their money, and this is going to be great for consumers. The HUGE
jump in multi-threaded performance is especially interesting.

~~~
yread
But it's still losing in every web and office single threaded benchmark. EDIT:
compared to 7700k that costs about the same

~~~
olegkikin
But it wins in every case where performance truly matters: multi-threaded
calculations, archiving, encoding. You know, the things that actually do take
up time.

Do you really care if your webpage takes half a second longer to render?

~~~
teilo
No, it doesn't. Their hyperthreading model is clearly not as good as Intel's.
Take a look at the Handbrake tests, which take full advantage of
hyperthreading. The 7700K, with half the cores, is only a few frames/sec
behind.

If I were after encoding performance, I wouldn't buy a Ryzen. If I were after
rendering performance, I wouldn't buy a Ryzen. Throw a mid-range Nvidia card
on there for OpenCL, and the only CPU that makes sense in that price range is
the 7700K.

~~~
slantyyz
From everything I've seen, the Ryzen's main appeal is bang for the buck.

If you're after pure performance, then you simply go with the best for your
use case, whether it is Ryzen or not.

If you're on a tight budget, however, things get interesting.

~~~
Geee
It's not so clear, because 7700K offers more bang for the buck in single-
threading cases, which is more important in typical computer use.

~~~
slantyyz
> It's not so clear, because 7700K offers more bang for the buck in single-
> threading cases

hence, "things get interesting"

------
dmm
> ... this means that the base memory controller in the silicon should be able
> to support ECC. We know that it is disabled for the consumer parts, but
> nothing has been said regarding the Pro parts.

ECC should be a standard feature. If you don't want it, you don't have to use
it, but disabling it simply for market segmentation is lame.

~~~
octoploid
Anandtech is wrong and ECC is not disabled at all. For example all ASRock AM4
boards support ECC memory.

~~~
dmm
Thanks for pointing this out. I found this on the specification page[0] for
the ASRock "X370 Killer SLI/ac" AM4 board:

> AMD Ryzen series CPUs support DDR4 2667/2400/2133 ECC & non-ECC, un-buffered
> memory

So the board seems to accept ECC memory; hopefully that means the memory
controller actually performs ECC? I found a reddit comment claiming that the
boards accept ECC memory but don't perform error correction[1]. Not a great
source, but it seems possible.

Do you have any more sources suggesting that ECC is actually being
implemented?

For example, BIOSes often have ECC settings; if we had some screenshots of an
AM4 board BIOS showing ECC settings, that would be strong evidence.

[0]
[http://www.asrock.com/mb/AMD/X370%20Killer%20SLIac/index.asp...](http://www.asrock.com/mb/AMD/X370%20Killer%20SLIac/index.asp#Specification)

[1]
[https://www.reddit.com/r/Amd/comments/5v0cqo/ryzen_supports_...](https://www.reddit.com/r/Amd/comments/5v0cqo/ryzen_supports_ecc_memory/de4rfer/)
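edit: On Linux there'd also be a way to check without BIOS screenshots: the
kernel's EDAC subsystem only registers a memory controller under sysfs when it
sees working ECC. A small sketch (the sysfs path is the standard EDAC
location, but whether it gets populated depends on driver support for the
platform):

```python
import os

def ecc_controllers(edac_root="/sys/devices/system/edac/mc"):
    """List memory controllers the Linux EDAC subsystem has registered.

    An empty list means the kernel sees no active ECC -- either no EDAC
    driver is loaded for the platform, or the board isn't doing ECC even
    when ECC modules are installed."""
    if not os.path.isdir(edac_root):
        return []
    return sorted(d for d in os.listdir(edac_root) if d.startswith("mc"))
```

A non-empty result on an AM4 board with ECC DIMMs would be exactly the kind of
evidence I'm after.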

~~~
octoploid
Several people contacted ASRock in the last few days, and the reply was always
that ECC is fully supported, e.g.
[https://news.ycombinator.com/item?id=13762950](https://news.ycombinator.com/item?id=13762950)

~~~
dmm
That's good information, but it still doesn't tell us that these CPUs support
ECC, right? Maybe the motherboard fully supports ECC when used with a future
1800X-PRO but not with the 3 chips reviewed here.

In that case anandtech saying "We know that it is disabled for the consumer
parts" and ASRock saying "ASRock AM4 motherboards fully support ECC function"
can both be true.

~~~
octoploid
AMD will use the same die for its server CPUs, so ECC support is on the chip
anyway. I see no reason why they would turn it off in the Ryzen CPUs launched
today.

~~~
dmm
It looks like you are (thankfully!) right. The CEO of AMD confirmed ECC is
enabled on the consumer chips. This is an awesome release overall!

[https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_crea...](https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def6vs2/)

------
timdafweak
Impressive! While it doesn’t utterly destroy Intel, AMD does offer a MUCH
better price/performance ratio. Things have gotten much more interesting
indeed. Intel’s dominance is being called into question. As a result, we all
profit.

~~~
clarry
It offers a much better price/performance ratio if heavily multi-threaded use
cases matter to you (that is where it competes with Intel's seriously
overpriced >$1000 chips).

It seems that in common use (web browsing, office, gaming), fewer but stronger
cores still shine and even the fastest Ryzen is slower _and_ more expensive
than Intel's offering.

So it _really_ depends. A lot.

~~~
compton_effect
Well... for the time being. When the R5 and R3 get released in Q2, we will
have 6 and 4 core versions with higher clock speeds, so that might help a bit.
Plus, based on the pricing, I'm gonna bet a 4 core Ryzen will be a bit cheaper
than a 4 core Kaby Lake.

------
robotmay
The 1700 is really well placed: it often beats the i7 7700K in benchmarks
whilst drawing less power and costing roughly the same or less. Probably even
better value if the stock coolers are as solid as they sound. Definitely going
to be a popular chip with gamers (I'm very tempted myself).

Hopefully more software starts to make use of multiple cores more effectively.
It's not like it's a new feature of chips any more.

~~~
MikusR
Which review has a power usage comparison?

~~~
jhasse
computerbase.de has one here: [https://www.computerbase.de/2017-03/amd-
ryzen-1800x-1700x-17...](https://www.computerbase.de/2017-03/amd-
ryzen-1800x-1700x-1700-test/6/)

    
    
                       R7 1700         i7 7700K
        Cinebench      120 W           112 W
        Prime 95       128 W           145 W

~~~
tutanchamun
Doesn't Prime95 use AVX2 and FMA3 since version 28.5, which pushes Intel's
power consumption higher? Can that be disabled?

~~~
jhasse
Yes, that's probably the reason for the increased power consumption. But Ryzen
also supports AVX2, doesn't it?

~~~
tutanchamun
Ah yeah, I misremembered. I thought Ryzen had no AVX2, but it actually does,
just at half the width.

~~~
hajile
Ryzen is half as wide, but executes at full speed. Intel drastically reduces
clock speeds when executing AVX.

[http://www.intel.com/content/dam/www/public/us/en/documents/...](http://www.intel.com/content/dam/www/public/us/en/documents/white-
papers/performance-xeon-e5-v3-advanced-vector-extensions-paper.pdf)

------
izacus
If I read the specs correctly, Ryzens don't have integrated GPUs. Are there
any motherboards out there with GPUs on board? Or do you always have to add a
150EUR+ graphics card (assuming I want at least 2x 4K display support)?

The 1800X does seem like a killer value for the money for us C++ people :)

~~~
trueSlav
The fact that it doesn't have an integrated GPU is a GREAT THING. This is an
enthusiast/server CPU; if you can't be bothered to buy a cheap dedicated
graphics card, wait for the mobile/APU Zen.

~~~
izacus
It's mostly also a packaging issue - I don't need 3D capabilities, so a basic
4K-capable Intel GPU works well and doesn't require me to add a graphics card
to a chassis (which can then be mini-ITX).

Hence my question about having a GPU integrated on the motherboard, like older
boards did.

~~~
mtgx
You'll have to wait for the APUs, then, which probably won't arrive until
fall.

------
keldaris
I'm someone who's still running an ancient (2008, IIRC) Nehalem i7-920 based
workstation for daily use and was eagerly waiting for this launch to finally
decide on a new build. After reviewing all the day one benchmark data, I've
come to the odd decision that I'm probably going for an i7-7700k, especially
if Intel drops the prices by even a tiny amount.

I say "odd" because, apart from basic daily tasks and occasional gaming, my
workload mostly consists of scientific computing. Ryzen's performance on AVX2
is precisely as bad as I expected from the architecture (effectively, a 50%
penalty), and given the thermal constraints on the 6900K, a lightly
overclocked 7700K (usually easy to get to 4.8-5 GHz after delidding) seems to
deliver by far the best price/performance ratio even on multithreaded
workloads, as long as vectorization is a significant component of the
workload. Not to mention the added benefit of delivering the undisputed best
single-threaded performance money can buy. Ryzen also seems to have issues
with memory bandwidth that may or may not be solved by future microcode or
motherboard firmware upgrades, and very limited overclocking headroom.

It's still nice to see AMD bring competition back to the CPU marketplace, and
I'm looking forward to their server CPUs in Q2. I'd also love to be proven
wrong on anything I said above and reconsider my build options, so please
point out any oversights on my part.

~~~
hajile
Are there any benchmarks that show Ryzen at 50% slower on AVX2? Intel does AVX
in one clock cycle, but drastically cuts clock speeds while AVX instructions
are executing. You can overclock a lot, but the chip will then reduce the
multiplier the second AVX appears, to keep from killing itself.

If we assume a similar AVX IPC (with Ryzen at half width for AVX2), you double
your cores with Ryzen, so the AVX2 throughput per clock would be about the
same. The 7700K clocks higher, but then downclocks again when AVX instructions
are encountered. It seems like, in practice, they are about the same for AVX
performance, but Ryzen is ahead in every other multi-threaded bench.

[http://www.intel.com/content/dam/www/public/us/en/documents/...](http://www.intel.com/content/dam/www/public/us/en/documents/white-
papers/performance-xeon-e5-v3-advanced-vector-extensions-paper.pdf)
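To put some rough numbers on that reasoning: the model below uses simplifying
assumptions (2 FMA ops per 32-bit lane per clock, and Zen splitting 256-bit
AVX2 into 128-bit halves), not measurements.

```python
def fp32_ops_per_clock(cores, effective_vector_bits, fma_ops_per_lane=2):
    """Peak single-precision ops per clock: cores x SIMD lanes x FMA ops."""
    lanes = effective_vector_bits // 32  # 32-bit floats per vector register
    return cores * lanes * fma_ops_per_lane

ryzen_1800x = fp32_ops_per_clock(cores=8, effective_vector_bits=128)  # AVX2 split in half
i7_7700k = fp32_ops_per_clock(cores=4, effective_vector_bits=256)     # full-width AVX2
print(ryzen_1800x, i7_7700k)  # 64 64 -- a wash per clock
```

So on this crude model the two chips have the same peak AVX2 throughput per
clock; real clock speeds (and Intel's AVX downclocking) decide the rest.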

~~~
keldaris
There are already a few benchmarks that are directly relevant; take a look at
the FFTW performance, for instance [1] (sourced from [2]), which consists
mostly of single-precision AVX instructions. The 1800X is handily beaten even
by a deliberately downclocked 7700K, which costs a bit over half the price.

But that's only half the story. The thing is, a decent 7700K (after fixing the
thermal interface via delidding and using high-end air or low-to-mid-end water
cooling) is actually capable of running at 4.8-5 GHz without downclocking for
AVX. You might get an unlucky sample and only do 4.6 GHz, but it's still cost-
effective. A 6900K, on the other hand, doesn't have the thermal headroom to go
much past its stock frequencies on AVX-heavy workloads, unless you go full
crazy with sub-ambient cooling.

So, in the end, it looks like the 7700K has at least double the
price/performance ratio of an 1800X in AVX-heavy workloads, along with vastly
superior single-thread performance. It looks to me like Ryzen is a great leap
in price/performance only insofar as your workloads are highly multithreaded,
yet don't make much use of vectorization at all.

[1]
[http://media.bestofmicro.com/ext/aHR0cDovL21lZGlhLmJlc3RvZm1...](http://media.bestofmicro.com/ext/aHR0cDovL21lZGlhLmJlc3RvZm1pY3JvLmNvbS8zL0QvNjU3MTkzL29yaWdpbmFsLzAxLUZGVFctMUQucG5n/r_600x450.png)

[2] [http://www.tomshardware.com/reviews/amd-
ryzen-7-1800x-cpu,49...](http://www.tomshardware.com/reviews/amd-
ryzen-7-1800x-cpu,4951-10.html)

------
aesthetics1
What is the deal with 16 PCIe lanes? With the move to PCIe storage, this is
disappointing. I am happy to see the price/performance is good (which is
surely going to force Intel to be competitive again), but if this is targeting
the high-end, I cannot understand 16 PCIe lanes.

~~~
slizard
The CPU has 24 lanes (16 for graphics, 4 for NVMe storage, and 4 linking to
the chipset), and the chipset can add up to 8 more; see the first table in
[1].

It's a pity that they did not include at least 32 lanes on-chip to allow two
full x16 GPUs at least on the 1800X, but I hope there is a 1900X in the
pipeline with more lanes.

[1] [https://arstechnica.com/gadgets/2017/03/amd-ryzen-
review/](https://arstechnica.com/gadgets/2017/03/amd-ryzen-review/)

~~~
aesthetics1
This table (in the linked article) shows that each CPU has 16 lanes:

[http://www.anandtech.com/show/11170/the-amd-zen-and-
ryzen-7-...](http://www.anandtech.com/show/11170/the-amd-zen-and-
ryzen-7-review-a-deep-dive-on-1800x-1700x-and-1700/3)

------
msimpson
AMD Ryzen 1800X (Summit Ridge):

3.6 GHz (4.0 GHz Turbo) over 16 threads

15,812 Passmark for ~$500

\-----

Intel i7-7700K (Kaby Lake):

4.2 GHz (4.5 GHz Turbo) over 8 threads

12,321 Passmark for ~$350

\-----

Intel Core i7-6850K (Broadwell E):

3.6 GHz (4.0 GHz Turbo) over 12 threads

14,500 Passmark for ~$575

\-----

Competition is back. Yay!
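Dividing those quoted scores by the quoted prices gives a rough value metric
(Passmark is multi-thread-weighted, so take it with salt):

```python
# (Passmark score, approximate USD price) as quoted above
chips = {
    "Ryzen 1800X": (15812, 500),
    "i7-7700K": (12321, 350),
    "i7-6850K": (14500, 575),
}
# Sort best value first and print points per dollar for each chip
for name, (score, price) in sorted(chips.items(), key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name}: {score / price:.1f} Passmark points per dollar")
```

By this metric the 7700K still wins on raw points per dollar; the 1800X's edge
is over Intel's comparably priced many-core part, the 6850K.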

~~~
floatboth
And that's only CPU prices, don't forget mainboards, which are overpriced on
the Intel X99 side!

~~~
tracker1
No kidding, especially considering I'd like to stay in mAtx form factor...

------
rl3
Out of curiosity, why aren't CPUs with asymmetrical core designs a thing? For
example: a single, large primary core with a high clock speed supported by
many lower-clocked smaller cores. Theoretically that would allow for the best
of both worlds.

I imagine if such a design existed, it would disincentivize proper multi-core
utilization in software to some degree. It would likely further complicate
parallel programming as well.

At the same time, multi-core computing has been a thing for well over a decade
now, and the reality is that a CPU's single-core performance still matters.

Are the efficiency advantages inherent in symmetrical core designs simply too
great? Would asymmetrical designs not be suited for use with existing OS
schedulers? Is operating a chip at multiple frequencies simultaneously somehow
insane from a design perspective?

~~~
xigency
You can look up IBM's Cell design and try to gain some understanding from
former PlayStation 3 game developers if you want to get a background on this.

The major difference is that the PPE and SPE's have the same clock speed, but
it is definitely a heterogenous architecture.

------
tcoppi
They did what they needed to do: get close enough on performance and undercut
on price. Should be a winner until Intel makes its next move.

------
kayoone
Seems like gamers in general are a little disappointed; for everyone else,
though, this CPU is amazing at that price point.

~~~
mtgx
And it's mainly because games aren't optimized to take advantage of 16 threads
- yet. That's partially Intel's fault for keeping 8+ core processors so
expensive.

AMD is trying to change that. Of course, games that use 16 threads won't be
available from day one of the Ryzen launch.

~~~
Orangeair
As a lot of other people online are pointing out, this is pretty much the
exact same thing people said when Bulldozer came out, but it never
materialized. AMD has always won in terms of core count, and people have been
saying for years that games capable of taking advantage of 8+ threads/cores
are right around the corner, but it never really happened. We shouldn't be
basing purchase decisions today on what might theoretically happen at some
point in the future.

~~~
tormeh
They are here now. Almost no modern AAA game recommends fewer than 4 cores.
Many won't start with fewer than 4 threads. And they do scale to 8 cores. This
is because of the consoles, which both expose 6-7 of their 8 cores to games.

~~~
brandmeyer
According to the Steam hardware survey, only about half of PC gamers have four
cores, with the other half having two cores. Granted, many of those two-core
systems will be four-thread systems, but I don't think it's fair to say that
the market is dominated by 4+ core users today.

[http://store.steampowered.com/hwsurvey](http://store.steampowered.com/hwsurvey)

------
tutanchamun
The power consumption is really nice compared to Bulldozer:

[http://www.tomshardware.com/reviews/amd-
ryzen-7-1800x-cpu,49...](http://www.tomshardware.com/reviews/amd-
ryzen-7-1800x-cpu,4951-11.html)

[https://translate.google.com/translate?sl=de&tl=en&js=y&prev...](https://translate.google.com/translate?sl=de&tl=en&js=y&prev=_t&hl=de&ie=UTF-8&u=https%3A%2F%2Fwww.computerbase.de%2F2017-03%2Famd-
ryzen-1800x-1700x-1700-test%2F6%2F&edit-text=)

------
daxfohl
Do they ever do compiler speed tests or boot speed tests? Which of the
existing ones would be most similar?

~~~
DannyBee
GCC is part of spec2006, which they optimize for.

But the actual compilers people use (i.e. not that version of GCC) are rarely
well-optimized for the CPU. In fact, most compiler profiles are not flat yet,
and where your compilation takes a long time, it's because some optimization
or algorithm has gone nuts and needs to be fixed.

Until that state of the world changes, measuring compile time is, IMHO, a bit
pointless, because compilers are only artificially CPU-bound, and not in a way
that is useful to benchmark. (I.e., while I understand that people use these
compilers every day and care about performance, I'm just saying that, given a
compiler benchmark, I could still probably make it take zero time with limited
amounts of work.)

~~~
brandmeyer
GCC is a useful benchmark not because it is unoptimized, but because it
represents a real-world work profile that is otherwise poorly represented in
SPEC. The working set sizes are large relative to the cache size, and it's
composed of a bunch of pointer-chasing, branch-heavy logic as it winds through
the various trees and intermediate languages. That profile is quite a bit
closer to the profile of common business and server applications than nearly
all of the other members of SPEC.

~~~
DannyBee
"GCC is a useful benchmark not because it is unoptimized, but because it
represents a real-world work profile that is otherwise poorly represented in
SPEC."

Sure, in that sense, but you also don't want benchmarks that are trivial to
game by performing optimizations to them. :)

~~~
brandmeyer
For SPEC, you can't edit the source code of the unit under test. Only the
compiler and host hardware count. Some components of SPEC have been "broken"
by compilers in the past, by implementing some optimization or other that
suddenly made that particular test code much faster. But you don't ever get to
re-write the benchmark itself, you can't replace a 2^N path with an N^3 path
or anything like that. So these kinds of breakages have almost always been
restricted to benchmarks that are reducible to one or two relatively tight
inner arithmetic-heavy loop(s).

If someone managed to make an optimization that could suddenly make GCC 2x
faster without humans editing GCC's source, that would be a major CS
breakthrough :)

~~~
DannyBee
I'm aware of the limitations on SPEC editing.

However,

"If someone managed to make an optimization that could suddenly make GCC 2x
faster without humans editing GCC's source, that would be a major CS
breakthrough :) "

No, it really wouldn't. For example, if you optimized the line-ending
detection and tokenization in GCC's parser to use SIMD, it would become
significantly faster. Not 2x, but significantly. These are transformations
entirely doable by compilers.

"But you don't ever get to re-write the benchmark itself, you can't replace a
2^N path with an N^3 path or anything like that."

Except you can, by hoisting calls out of loops, etc. There are also dynamic
memoization optimizations that can be applied, etc.

"If someone managed to make an optimization"

This is a common fallacy, that it's ever really one optimization. It's usually
combinations of optimizations, each buying you N%.

I will state outright, having worked on pretty much all of the slow parts of
GCC (according to SPEC profiles), and having a team of 80+ people working on
compilers: 2x on GCC through compiler optimization is more than within the
realm of possibility.
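As an illustration of the lexing point (an analogy, not GCC's actual code):
scanning a buffer for line endings with one vectorized primitive instead of a
byte-at-a-time loop is exactly the kind of N% win that stacks up.

```python
def count_newlines_slow(buf: bytes) -> int:
    """Byte-at-a-time scan, the way a naive lexer walks its input."""
    n = 0
    for b in buf:
        if b == 0x0A:  # '\n'
            n += 1
    return n

def count_newlines_fast(buf: bytes) -> int:
    """Single bulk scan; CPython defers to optimized C, and libc routines
    of this kind (e.g. memchr) use SIMD under the hood."""
    return buf.count(b"\n")
```

Same result, very different cost per byte on a large translation unit.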

------
noipv4
The proof is in the pudding. Competition is back. Intel has started slashing
prices of its latest CPUs left, right, and center.

~~~
sounds
I'm actually curious which prices have changed. A quick google turned up an
article claiming prices were being cut [1] but looking at the full price
history tells a different story:

[http://imgur.com/a/vDHAv](http://imgur.com/a/vDHAv)

It looks like Intel's retail channel partners still have a lot of inventory in
the pipeline. Prices are still within the bounds of the second half of 2016.
My best guess is the retail channel partners will be the ones most hurt by any
slowdown of Intel CPU sales.

[1] [http://www.digitaltrends.com/computing/intel-cpu-prices-
drop...](http://www.digitaltrends.com/computing/intel-cpu-prices-drop-ryzen-
launch/)

~~~
noipv4
[http://www.guru3d.com/news-story/intel-is-dropping-
processor...](http://www.guru3d.com/news-story/intel-is-dropping-processor-
prices.html)

[http://www.microcenter.com/product/472529/Core_i7-7700K_Kaby...](http://www.microcenter.com/product/472529/Core_i7-7700K_Kaby_Lake_42_GHz_LGA_1151_Boxed_Processor)

~~~
sounds
Ok, I added the i7-7700K, but I only see a $10 price drop:

[http://imgur.com/a/vDHAv](http://imgur.com/a/vDHAv)

------
anonymousDan
Does anyone know if the Ryzen 1800 desktop chips support the new Secure
Encrypted Virtualization (SEV) extensions, or are they only for server-class
machines (Naples)?

------
MaysonL
I wonder: is there any possibility that Ryzen is what Apple has been waiting
for to refresh its desktops?

~~~
scott_karana
I doubt it. During their big switch to x86, their laptops shipped with the new
Core Duo before any other manufacturer, and I suspect they'd have a similarly
dramatic launch if it were true.

------
bitL
I'll definitely consider 1800X for my movie rendering/machine learning rig,
especially if they confirm ECC functionality. 40 PCI-E lanes would be better,
but I can live with 2x x8 GPUs and only one x4 SSD for machine learning.

The 4C/8T parts would be interesting - if they can run them cool (given the
8C/16T 1700 uses 65W, I think that's not unreasonable), they can probably
clock them high enough to reach 7700K parity at 95W.

------
sengork
Here is another review link which is a good complement to Anandtech's
coverage:

[http://techreport.com/review/31366/amd-
ryzen-7-1800x-ryzen-7...](http://techreport.com/review/31366/amd-
ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed)

------
BuckRogers
Looks great - a successful coup by AMD. Takeaways after a few of the reviews
on this list[0][1]:

\- If you're buying a desktop quadcore processor in 2017, you're doing it
wrong. May as well buy a laptop.

\- Kabylake sucks as a workstation chip but is ok for gaming if your PC is
solely a glorified Xbox.

\- _Vast_ majority (90%+) of gaming rigs are GPU-bound. Think very carefully
before giving up 8 cores and 16 threads at the same price.

\- Intel's margins and lineup are utterly destroyed especially once you
consider how expensive Intel HEDT boards are on top of it all.

\- AMD made CPUs Great Again. Bravo AMD!

[0][https://videocardz.com/66826/amd-ryzen-7-review-
roundup](https://videocardz.com/66826/amd-ryzen-7-review-roundup)

[1][https://www.youtube.com/watch?v=9wJQEHNYE7M](https://www.youtube.com/watch?v=9wJQEHNYE7M)

~~~
mee_too
>> If you're buying a desktop quadcore processor in 2017, you're doing it
wrong. May as well buy a laptop.

Not true: IDEs, web browsers, and office tools need single-core performance,
not 4+ cores. A laptop is more expensive, makes more noise when loaded, is
harder to upgrade, is less customizable, has fewer ports, and is more
expensive to expand (Thunderbolt devices are not cheap at all). What are you
talking about?

>> Intel's margins and lineup are utterly destroyed especially once you
consider how expensive Intel HEDT boards are on top of it all.

There are few people who build workstations for themselves (even among
software developers). HP and Dell haven't announced any Ryzen systems as far
as I'm aware. Right now AMD's new chip will have minimal impact on Intel's
profits. Only if AMD can remain competitive on stability, performance, and
price for a few years will Intel start losing major chunks of income and
market share.

>> Kabylake sucks as a workstation chip but is ok for gaming if your PC is
solely a glorified Xbox.

PCs are universal machines, you can play games AND do work on the SAME PC ;-)

>> Vast majority (90%+) of gaming rigs are GPU-bound. Think very carefully
before giving up 8 cores and 16 threads at the same price.

GPU-bound does not mean all CPUs produce the same framerates on a given GPU.

I personally do hate the Intel and nVidia monopolies and hope AMD can increase
their market share, but your points are very weak.

~~~
srssays
IDEs are very easy to parallelise, since most of the hard work is done in
background tasks, which can run simultaneously. Compilation also parallelises
very nicely.

------
throwaway132791
Rather strange that AMD's stock is down ~2.5%.

~~~
Smushman
A normal effect on the stock price around a major announcement.

It even has a name: "buy the rumor, sell the news".

[http://www.investopedia.com/terms/n/news-
trader.asp](http://www.investopedia.com/terms/n/news-trader.asp)

The idea behind this, simply put, is that if you are in a trade, for whatever
reason, with a stock that has a major announcement coming, you are pretty much
at the potential peak at announcement time, so it is a good time to get out.

~~~
dabadoo
It's not really true. Half the time it goes down, half the time it goes up.

------
caf
Did anyone else find the "Simultaneous MultiThreading (SMT)" section of the
article to be long on hand-waving and short on detail? It seems like all the
actual information there is conveyed in the diagram; the text is full of
nearly content-free sentences like _"With each thread, AMD performs internal
analysis on the data stream for each to see which thread has algorithmic
priority."_

------
djrogers
I'm really interested in these from a virtualization standpoint, especially
with 8 cores for so little $$. I'd love to see a good set of tests/benchmarks
with a completely virtualized workload.

~~~
mtgx
Speaking of which, Zen/Ryzen was supposed to come with Secure Encrypted
Virtualization (SEV), but I don't know if any of these reviews have covered it
yet:

[https://www.techpowerup.com/226719/amds-zen-to-implement-
adv...](https://www.techpowerup.com/226719/amds-zen-to-implement-advanced-
security-features-not-found-in-intels-solutions)

It's possible only Naples will support it.

------
gigatexal
Since I'm not much of a gamer I'm looking at these chips. I wonder what an
optimized (packages compiled with flags specific to the chip) Linux system
around ryzen would be like.

~~~
antouank
[https://news.ycombinator.com/item?id=13774293](https://news.ycombinator.com/item?id=13774293)

~~~
floatboth
Would be nice to also see how long a FreeBSD buildworld takes :)

~~~
gigatexal
That'd be a fun test, too. One would need to control for disk I/O, but it
would definitely be useful. I wonder how many HN readers run FreeBSD.

~~~
contras1970
FreeBSD 12.0-CURRENT on this very laptop.

I'm going to build an 1800X-based box in a few weeks (for games; planning to
run Steam on a GNU/Linux distro), I'll want to see `make -j16 buildworld` for
sure.

~~~
floatboth
Looking forward to that! Also would be nice to see -j8 results.

------
gravelc
I'm still trying to work out whether dual-channel-only RAM means 64GB is the
limit, or do different mobo chipsets actually support quad-channel? Kind of a
deal breaker (for bioinformatics work) if I can't go over 100GB of RAM, which
is really annoying given those cheap threads.

~~~
yuhong
AM4 only has pins for dual channel I think.
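Assuming the usual AM4 layout (two channels, two DIMM slots per channel) and
16GB as the largest unbuffered DDR4 module commonly available at launch, the
ceiling works out to:

```python
def max_memory_gb(channels=2, dimms_per_channel=2, largest_udimm_gb=16):
    """Capacity ceiling for an unbuffered dual-channel platform like AM4."""
    return channels * dimms_per_channel * largest_udimm_gb

print(max_memory_gb())  # 64 -- short of the >100GB the parent needs
```

Getting past that would take quad-channel or registered DIMMs, i.e. Naples
territory.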

------
hvidgaard
I'm optimistic, but I'd like to see benchmarks for compiling code. Right now I
buy really expensive 8 core CPUs, because multiple threads do speed up
compilation quite a bit. If Ryzen can match that at half the price, it will be
most welcome.

------
jtl999
So we know AM4 supports ECC RAM but will any motherboards come out with IPMI?

SuperMicro has spoiled me.

------
lightedman
Barely half the PCI-E lanes of my FX-9350.

Nooooooooooooope. Was looking forward to a nice SLI rig but looks like Ryzen
just died for my choice on upgrade. Guess I'll stick with the power-hungry
beast.

~~~
mee_too
8 lanes per GPU is plenty; the performance difference vs. 16 is ~3%. Games
will run faster with an SLI setup on Ryzen. Why do you even need SLI? A 1080
Ti will be good enough for 5K gaming.

~~~
lightedman
Your piddly 5K doesn't stand up to my 8K professional video setup. To boot, I
have multiple drives, and PCI-E/M.2 based networking and storage as well.

Please try again when you need to juggle 1,000+ video feeds simultaneously and
truly understand what REAL power requires.

~~~
MrRadar
If _those_ are your needs, you're well beyond enthusiast/prosumer and into
workstation/enterprise server territory. Their upcoming Naples platform will
probably be right up your alley, with octa-channel memory and up to 32
cores/64 threads per socket (though, of course, I'm sure it will all come at a
cost).

~~~
lightedman
"If those are your needs you're well beyond enthusiast/prosumer"

Allow me to introduce you to Camfrog. Absolutely consumer, boss.

~~~
MrRadar
You said 8K video with thousands of streams. That thing just looks like a
normal video chat app that any modern CPU should have no problems with.

~~~
lightedman
No, you need GPU acceleration because you can open as many videos as you can
fit on your screen. I have quad 4K monitors for an 8K setup. I can have a
couple thousand video streams at once. Let me see ANY CPU that does that.

~~~
solotronics
just curious what are you doing with all those video streams?

------
laughfactory
Man this is great news! I can't wait to build a new overly powerful rig with
these new chips from AMD. Sweet!

------
BlytheSchuma
I'll never buy an AMD processor, but I'm glad they are giving Intel some
competition.

------
bdz
Looking at those gaming benchmarks, it turns out single-core performance is
still more important than having more cores.

[https://www.purepc.pl/procesory/premiera_i_test_procesora_am...](https://www.purepc.pl/procesory/premiera_i_test_procesora_amd_ryzen_r7_1800x_dobra_zmiana)

~~~
tutanchamun
What's up with some of the tests? For example this one:

[https://www.purepc.pl/procesory/premiera_i_test_procesora_am...](https://www.purepc.pl/procesory/premiera_i_test_procesora_amd_ryzen_r7_1800x_dobra_zmiana?page=0,20)

A Broadwell at 3300 MHz (3700 MHz boost) beats a Kaby Lake (Skylake...) at
4200 MHz (4500 MHz boost)? Can the Broadwell use the 128MB eDRAM from the iGPU
as an L4 cache if the iGPU is not used? Would that actually make such a
difference?

~~~
dr_zoidberg
> Can the Broadwell use the 128MB eDRAM from the iGPU as an L4 cache if the
> iGPU is not used? Would that actually make such a difference?

That's exactly what it does, and it gives you a "free"* ~20% increase in
memory operation performance (which tends to be a ~20% increase in general
performance).

* not actually free, because you paid for that 128MB of eDRAM...

~~~
tutanchamun
Nice. Now I'm wondering why Intel doesn't slap 128MB of eDRAM on a Kaby Lake
i7 4 core 8 thread and release it for the enthusiast X99 platform. Maybe even
select chips to make it possible to release them at 5 GHz. Of course with a
large premium (expensive eDRAM etc.).

~~~
dr_zoidberg
The Iris chip (the extra eDRAM for GPU/L4) is seen by Intel as a performance
improvement for mobile solutions (ultrabooks, MS Surface Pro, maybe MacBooks
some day?) and not a desktop solution.

But from a consumer point of view, it increases performance (either for
graphics, or for the whole system if I have a discrete GPU), and it can also
help with GPU numerical processing, because you get "free" transfers from
memory (L4) to graphics memory by flipping a bit (or a few bits) in hardware.
I don't know why this wasn't expanded further, apart from the cost of the
embedded RAM.

~~~
clarry
This is what I got frustrated with and decided to stop waiting & buy a Ryzen,
in fact. I've been looking forward to an enthusiast segment desktop chip with
eDRAM and Iris/Iris Pro (or better!) from Intel, for a few years now, but it
hasn't surfaced.

------
vladimir-y
I'm excited that competition is returning - no more Intel monopoly.

------
pmoriarty
Do high-end Ryzen chips need liquid cooling?

~~~
sp332
No, but if they have thermal and power headroom, the X chips will add 100MHz
above the "boost" clock. They call it XFR (eXtended Frequency Range). So
having a water cooler will make it more likely to get that last 100MHz, but I
don't think it's going to make a big difference if you're not overclocking.

------
seoseokho
Finally!

