AMD Ryzen 5 3600 Beats Intel’s Core i7-9700K in Cinebench (techquila.co.in)
286 points by areejs 27 days ago | 196 comments



Since 2000, AMD has often beaten Intel on multi-threaded workloads, but never single-threaded (at comparable clocks). This bench shows AMD beating Intel on both! That's what's incredibly notable here. The last time it happened was 20 years ago with the AMD K7.

Now Zen 2 seems to overall beat Intel on every metric: single-threaded perf, multi-threaded perf, perf/dollar, perf/watt. No matter how you look at it, Zen 2 comes out on top.¹ Very impressive.

Man the folks at Intel must feel the heat.

¹ Except perf/socket when competing with the Xeon 9200, but that's just a PR stunt no one cares about: https://mobile.twitter.com/zorinaq/status/113576693566724096...


You want notable? AMD is currently beating Intel on price, single-threaded perf, multi-threaded perf, TDP/power usage at equivalent performance AND on many of the "little pluses on the side" (more PCI-e lanes, ECC support, ...).

When Zen first came out it was a huge deal, but it merely established them as a real competitor, with a decent advantage in many cases, reinforced with Zen+. But Zen 2 puts them ahead in almost every category, and in all markets; Threadripper and EPYC are just as strong in their areas.

Either Intel has something strong about to appear, or they're going to face a truly difficult few years with customers going AMD now that it's not merely "one generation of chip" that was good. It feels like getting their 10nm working will not be enough by itself.


I hope Intel will at least offer ECC memory for their consumer CPUs now. For now at least AMD are offering the option, which will only increase uptake, improving supply and driving prices down.

I hope that in a few years' time ECC will become standard; for many, going without it is like driving a car without a seatbelt.

Add security exploits (rowhammer) to that, and ECC looks like the solution; the price gap is, for many, still one that can and should be closed.

That all said, it would only take one or two big mobile manufacturers to go ECC and market the whole security and integrity aspect for the rest of the market to follow suit. That would be a bigger driver in reducing the price premium over non-ECC memory. Which is how I see things panning out, as mobile phone makers are running out of selling features to add and this would be an easy one for the premium market of phones out there today. Also one which would go down very well on that feature alone.


> I hope Intel will at least offer ECC memory for their consumer CPU's now.

They do (or did?). My home NAS is running a Sandy Bridge Intel Celeron with ECC memory. Support seems to be randomly distributed throughout the product line[1] though, and it obviously depends on the motherboard manufacturer to implement it as well.

In general, Intel has a problem with branding. Their product lines are a confusing mess, requiring you to look up each specific part to get a list of features it does or does not support. There's little rhyme or reason to it.

[1] https://arstechnica.com/civis/viewtopic.php?p=22587440


> Their product lines are a confusing mess, requiring you to look up each specific part to get a list of features it does or does not support. There's little rhyme or reason to it.

I got bitten by that back in the Core 2 era. When I built my very last Intel system, around an Intel DG43NB board, I picked up a Q8200, thinking it would be a great bang-for-the-buck chip. Little did I realize from the store display that the Q8200 was the only Core 2 Quad that lacked virtualization support.

The DG43NB died prematurely, due to capacitor plague. I didn't shed any tears for it.


Since Haswell, ECC support has been on (Core-based desktop) Celerons, Pentiums, i3s, and Xeons. It's quite straightforward: when they pulled 2C Xeons off the market they added ECC to the consumer processors instead, but they still want to retain Xeon sales for the higher-end products, so they lock it off on the i5s/i7s/i9s.

Of course you need to know what you're looking at, the suffix of the number matters a lot (eg 7100 vs 7100U), but that is nothing Intel-specific. A Ryzen 2700 and a Ryzen 2700U are very different processors as well.


The issue isn't that somehow the 7100U vs 7100 is some fundamental design decision that didn't include ECC support, it's that Intel had a chip where ECC was supported, then disabled it.

It's not because they are "very different processors".


The 7100U is a BGA laptop processor. You're not in danger of accidentally buying a 7100U to put into your NAS.

If you look at the products that are actually compatible with your system, it's not confusing at all. Pentium/Celeron/i3/Xeon = has ECC, i5/i7 = no ECC.


So what's the advantage of ECC, and why would you choose ECC over more, faster memory for less money? I've seen ECC touted in much marketing, but the use cases seem, well, boutique to me, such as video or computer graphics rendering.


I'm running 64 GB ECC with a Ryzen system. As it turns out, memory in general was just exorbitantly expensive in the recent past, so the ECC premium wasn't all that much. On the flip side, I don't do PC gaming so having faster memory didn't really matter much; current ECC was already faster than the 5 year old system I was upgrading from.

For me, the reason to go ECC was just to prevent silent file corruption. With 64 GB, the math is in favor of me seeing bit flips. Moreover, I tend to put my machine to sleep, rather than shut it off, which also increases the likelihood of memory errors. I wouldn't say I have a highly specialized workload outside of the occasional VM, consisting of large files, for development. A lot of it is I just didn't want to deal with silent corruption of family photos & videos, even if the underlying file formats are pretty resilient.

Personally, I think the situation should be flipped. Everyone should run with ECC these days and only run without for specialized environments like gaming where you want to squeeze out every FPS you can. Faster memory isn't going to matter for most situations.


Protection against memory errors. I don't have exact odds for such errors, but they're certainly not negligible, and with faster memory and larger capacities those odds only increase. Many such errors will go unnoticed and the impact for most will be unknown, but it can equally be severe. Hence today it is mostly critical systems that use ECC, as they can justify the extra cost.

But for everything else it's a scale of needs: sure, the low end would be a gaming console or graphics memory, but if that gap gets smaller, uptake and usage will scale as the advantage becomes cost-effective at various consumer levels.

But having the choice is, and always has been, a huge plus for any consumer - AMD makes that choice far more accessible than Intel. Though I do see a mobile phone maker going ECC as the turning point in uptake and in making that price gap more palatable in the end.


If you value stability and no file corruption, you'll go for ECC.

If you use it just for media consumption and playing games, anything goes.

A good power supply and ECC RAM make a very stable PC.


If you have ever had a corrupt file on your hard disk, ECC memory can help prevent that. Usually data corruption happens in memory (and then is saved to disk).


In short, stability. If the performance hit is low double digits, that may not even be noticeable for many use cases.


Reliability.

Whether one cares about that is one's own tradeoff.


Intel actually offers ECC memory by chipset, not CPU. If you get a board with the server-oriented chipset, it will often support a Core i5.


No, ECC isn't supported in that configuration. If you're lucky, the ECC memory works as normal RAM.

A few i3 CPUs support ECC, though.


I think it might have worked in some earlier generations of Core i, perhaps Sandy Bridge and Ivy Bridge.


There are some, but none in the desktop market segment; those are for embedded use.

All desktop segment chips with ECC have i3 designation.

See:

https://ark.intel.com/content/www/us/en/ark/search/featurefi...


And the motherboards from two years ago (AM4) will support the latest Zen 2 / Ryzen 3000 series CPUs[0]. Meanwhile Intel changes the socket every generation, which adds another $100-$300 to the cost of a CPU upgrade.

[0] Double check your board manufacturer for BIOS updates to support the new CPUs; my cheap B350 board has them.


Helpful link for knowing which motherboards are able to get their BIOS flashed to support Zen 2 without an installed CPU/memory:

https://redd.it/bvfo57


This is unfortunately not helpful to those of us who haven't made the jump to AMD yet :(


Well the good news is you can pick up a very inexpensive board now, even a used one, and it will be compatible with the latest CPU. I bought my motherboard July of 2017 and will be upgrading my CPU to a Ryzen 3000 after 7/7 launch.


It's been 24 years since you could use an Intel or AMD processor on the same motherboard and some people just can't get over it.


It's currently Intel's turn to have Jim Keller. They'll figure something out.

https://en.wikipedia.org/wiki/Jim_Keller_(engineer)


Everyone likes to look at individuals to solve problems when it's all about the team. I imagine the internal politics at Intel are much worse than at AMD but that's just a guess.


Generally I would agree, but in this case it's Jim Keller we are talking about.


I mean, you might be right, but looking at the lead times for a new CPU architecture I guess he will start making a difference about 3-4 years from now.


Which would still be good enough for Intel, considering their current market share. They'll have a couple bad years but Intel will still be on top if it takes them four years to beat AMD again. Many businesses and OEMs didn't even have AMD on the radar anymore. I'm still waiting for a suitable dev laptop based on ryzen.


Have a look at the ThinkPad. Lenovo has fully embraced Ryzen in desktop and laptop.


Interesting, he even popped by Tesla on his journey: https://web.archive.org/web/20180426124248/https://www.kitgu...


Pretty amazing to see him going straight from a B.S. into designing processors and fairly quickly becoming the head engineer on cutting-edge processors.


It looks like Intel completely messed up 10nm. Nothing new since 2015...

My understanding is that they pushed multi-patterning a bit too far and they have too many defects. 7nm, which uses a different tech and is being developed in parallel, seems to be going better.

So if Intel has a big thing in the making, I expect it to be 7nm, not 10nm.


Don’t forget... security ;)


> Either Intel has something strong about to appear

People are hypothesizing this to be true given that Apple is putting Intel rather than AMD chips in the Mac Pro, and Apple doesn’t usually make dumb purchasing decisions, but does sometimes have private access to product roadmaps.


There are a lot of reasons for Apple to choose Intel.

Apple has a lot of optimizations for Intel at the moment from the instruction set down to the motherboards and chipsets. A great example is all the work they do in undervolting mobile chips so they perform better (when the latest MBPs shipped with this disabled, everyone really complained). Re-writing all that software definitely has non-trivial R&D costs.

When making a new motherboard design, a ton of stuff simply gets reused and moved around. Switch to a different chipset and you start all over for a lot of stuff. Even if AMD were 10-20% faster overall, their current "fast enough" Intel chips would still win out.

AMD's Zen+ 3000 mobile chips don't compete with Intel in per-clock performance, clock speeds, or total power draw. With the exception of the Mac Pro, Apple's entire lineup uses mobile processors. In addition, Intel probably gives amazing discounts to Apple. Zen serves them best as a way to squeeze out an even better deal.

A final consideration is ARM. Given the performance characteristics of A12, Apple most certainly has their sights set on using some variant in their laptops in the not-too-distant future. They already run their phone chips in their macbooks as the T2 chip. They are probably working on the timing to allow those chips to run more than the touchbar and IO.


Pretty good analysis overall, although this part is not true:

> With the exception of the mac pro, Apple's entire lineup uses mobile processors.

In fact their entire desktop lineup now uses desktop-grade CPUs. (except possibly some entry-level iMacs that weren't subject to the recent refresh)

  * iMac Pro: Workstation processor (Xeon)
  * iMac 27": Socketed desktop processor (e.g. Core i9-9900KF in top config)
  * iMac 21": Socketed desktop processor (e.g. Core i7-8700)
  * Mac Mini: Soldered embedded desktop processor (e.g. Core i7-8700B)


Yeah, it looks like they switched over to desktop chips around 2017 (they still use laptop memory though -- except for imac pro).

An i9-9900K has a 95W TDP, but Anandtech puts the real load number at around 170W. I've seen people undervolt these down to around 110-120W in the 4.7GHz range. I imagine Apple's dynamic undervolting and custom motherboard can shave 10% or so off that total, and it can go much lower with fewer cores and lower frequencies. While even that isn't going to keep their tiny cooler from throttling under sustained loads, it could get much closer.


By "laptop memory" you mean SO-DIMMs; the only significant difference to full-size DIMMs is, well, size. Voltage and frequency tends to be the same, leaving aside extreme overclocker's RAM.

In c't magazine's review of the current iMac, they found that the whole machine appears to have a power limit which is shared by CPU and GPU, so yeah, Apple are definitely doing something fancy in that regard.


You’re talking about mobile here, and I get why—it’s a majority of their computer sales—but most of these arguments don’t apply in the case of the Mac Pro. A Xeon and an Intel mobile processor are different-enough chipsets that there isn’t much motherboard silicon that can be reused between them. (They could maybe reuse the chipset design from the iMac Pro, but Apple kept saying that was a “transitional” design—which I read as “an evolutionary dead-end stop-gap product that we aren’t basing our future design thinking off of.”)

Likewise, I do agree that Apple is likely switching to ARM for mobile—but are there any ARM cores on anyone’s product roadmaps that could power a Mac Pro, or even an iMac? Nah.

I do agree with the greater point: Intel probably do have an exclusivity agreement with Apple right now, and so Apple sticking with them right now isn’t evidence of anything in particular.

But to me, it looks like a natural shift for Apple, in the near future, to adopt a hybrid strategy: if they can switch to ARM entirely for their highest-volume segment (laptops et al), then they'll no longer need the benefits of having Intel as a locked-in high-volume chip supplier, and will thus be free to choose anyone they like for the much-lower-volume segment (desktops/workstations) on a per-machine basis. That might be Intel, or AMD, at any given time. They won't get such great deals from Intel, but in exchange they can play Intel and AMD off one another, now that they're in healthy competition again.

Intel is probably just as aware of what Apple has on its roadmap as Apple is aware of what’s on Intel’s roadmap, so I would expect, if anything, that they’re scrounging desperately around for a mobile architecture that’ll be competitive-enough with the nascent A13 to stave off that collapse of a partnership.


My prediction is that within 5–10 years, we'll be seeing ARM/Apple CPUs at the core of their laptops, low-end desktops, and even their top of the line Mac Pro. Powerful x86 CPUs will be still be available in the Mac Pro and implemented as an "accelerator" card.


Another large consideration is availability of chips. Can AMD spin up enough production quickly enough to handle Apple, on top of their own sales and their soon-to-start ramp up for the new Xbox and PS consoles?


Or it's just inertia. The Mac Pro (and every Mac computer) has always had Intel processors.

It's true that quite a few have had AMD GPUs, and they made the more difficult PowerPC to Intel switch back in 2006 with OS X 10.4. But it would be a significant effort to change a processor partnership more than a decade old. No Apple developers have anything but Intel in their machines; it's not just an item on a BOM.


> No Apple developers have anything but Intel in their machines

Given that Apple almost certainly has a research lab maintaining machines running macOS on top of Apple’s own ARM chips, to watch for the inflection point where it becomes tenable to ship laptops running those chips; and thus, given that Apple already has a staff for that lab whose job is to quickly redo macOS’s uarch optimization for each new A[N] core as Apple pumps them out; it doesn’t seem like much of a stretch that those same people would do a uarch optimization pass every once in a while to answer the “can we use AMD chips on desktop yet?” question, does it?


>People are hypothesizing this to be true given that Apple is putting Intel rather than AMD chips in the Mac Pro, and Apple doesn’t usually make dumb purchasing decisions

Apple wouldn't care that AMD narrowly beat Intel for 1-2 chip generations either. They'd care about AMD's ability to produce chips at large enough volumes and keep the pace going forward.

They've been burnt by Motorola before, and by Intel now; they're not going to just jump on a short-term bandwagon.

If AMD manages to keep this up (and ramp up their production) for 5+ years, then they might have a chance with Apple. But again, Apple is more likely to go for their own ARM based chips in 5+ years...


>Apple wouldn't care that AMD narrowly beat Intel for 1-2 chip generations either. They'd care about AMD's ability to produce chips at large enough volumes and keep the pace going forward.

I think there's no reason to bother switching to AMD if/when they plan on moving to their own ARM based CPU within the next few years.


The Adobe Suite, which accounts for the work of a LOT of Apple users, has some significant issues on AMD. Not that they couldn't/shouldn't fix them, but it's Adobe. This is probably a very large part of the issue. Beyond that, they're using custom board designs that have significant lead time, and changing platforms isn't easy for an OEM.

Not talking down about prior motherboards, but the next run will have some very high end designs and features compared to prior gen Ryzen as well. I'm really looking forward to upgrading in September/October. Looking at a 3950X unless a more compelling TR option gets announced before then.


Or there's a long term exclusivity agreement. Who knows.


I wish AMD would release something close to the NUC now :-( The only thing keeping me from buying them is my disinterest in building large desktops anymore. I'm over the windowed cases, water cooling, and jet-engine desktops.


Have you seen the Asrock Deskmini A300? It's not quite NUC sized but it's very close. I think it would fit the bill for you.


> The last time it happened was 20 years ago with the AMD K7.

K8, too. AMD's IPC was so far ahead of Intel's at that time it was crazy. 2.2GHz Athlon 64s were keeping up with or beating the 3-3.2GHz P4s.

It was so strong & competitive that Intel resorted to straight-up bribery to compete, resulting in many anti-trust judgements against it. But they succeeded in preventing K8 from hurting their market share, and kept AMD down despite a vastly superior product. Here's hoping that doesn't happen again this time around, but maybe Intel will decide the wrist slap is worth it.


I think that's unlikely. AMD has to be ready to deal with it somehow.


There's one thing left that I have both doubts and excitement for:

How good are AMD's laptop chips? The improvement in IPC and efficiency in Zen 2 can go a long way in improving this, and then, of course, they must improve perception.

Anecdotally: I've owned almost exclusively AMD chips in my desktop builds for the past 15 years. I've never once owned an AMD laptop. When Intel built the Core line of chips, they seemed to nail laptop first, and then apply the efficiency to their desktop line with higher clocks. It worked wonders.

In my opinion, AMD really needs to nail laptop CPUs/APUs now more than ever. I hope they do!


AMD's current laptop lineup consists of 12nm APUs. They will switch to 7nm next year, in the 4000 series.


Given AMD's recent success in engineering and product decisions, I think they'll overcome their previous laptop processor shortcomings. I'm optimistic. I don't think the process node was the primary driver, though. I think they just had some lingering issues with sleep states, etc. that need to be spot on to ensure excellent battery life.


Chances are I'll be upgrading to a Ryzen 3000 this year. Now that I think about it the last time I ran an AMD CPU was indeed the K7.


I've gone back and forth a couple times... currently 4790K, before that FX-8150, before that a first gen i7-860, before that an AMD XP/X2 ... Now looking at the R7-3950X.

Of course, this is the longest I've held out on upgrading, and getting itchy about it...


iirc, amd was beating intel on single thread in the pentium 4 era with way lower clocks.


The FX-60 from 2006 I think was the last AMD CPU that held this crown. 2.6GHz, and it beat Intel's top chip at 3.5GHz. The only thing on the market with higher single-threaded performance was their own FX-57, which was single core with a slightly higher clock speed.

https://www.anandtech.com/show/1920


I still have my old FX-60 in an antistatic clamshell. I just could never bring myself to get rid of it, I loved that processor so much. That and XPx64 held me for a very long time.


That's when Intel's "let's just go faster" approach ran head-on into the 4GHz barrier for the first time. More than a decade later and we're still pretty much there.

One has to wonder how things would have gone if they didn't have the Core-M architecture on the side back then.


Intel knew that the Netburst Architecture was power hungry so it had established an Israeli team to develop a mobile architecture in tandem. I believe they started with Tualatin or another late PIII design and optimized the hell out of the microcode. I believe that became Banias and they continued to bring in aspects of Netburst to produce the precursors to the Core line.


Yeah, that's why they invented nomenclature such as "AMD Athlon 3200+". It was equivalent to a 3.2GHz Pentium 4 in performance.


Netburst was rapidly killed; nobody hid that it was a failure.


Its death wasn't rapid.

RDRAM, though, that was rapid. But that was only a fraction of Netburst's problems.


Netburst lasted for 6ish years before it was no longer Intel's premier product, and then another 2-3 years afterwards in various forms.


Really... my memory is bad then. I thought it was 5 years, including a 2-year lingering period.


> Man the folks at Intel must feel the heat.

Well, their processors are very effective space heaters, so yeah.


I'm guessing you missed the comments regarding performance per watt?


You mean AMD beating Intel, therefore being less effective space heaters than Intel's own? Yeah, I saw those.


This is great news for consumers. I've always wanted to build another pc based on AMD chips and it looks like Zen 2 is going to be it.


I'd argue AMD and Intel were pretty close in single-threaded performance until the Core 2 Duo (released in 2006).


You'd be wrong. The Athlon 64 destroyed the Pentium 4, and the Athlon XP model names were originally meant to indicate that the lower-clocked AMD CPU would perform like a Pentium 4 clocked at the MHz in the model name.

Here's a top of the line Intel Pentium 4 3.46 Extreme Edition being unable to compete with AMD processors running at 66% of the clock speed of the Intel.

https://www.anandtech.com/show/1529

I really don't count Core Duo, even though I had a Macbook equipped with one. The Core 2 Duo was Intel's first real competition to Athlon 64.


I'm happy that AMD really seems to be surging again. It was finally time for the ol' mobo+cpu+ram rebuild for me recently, and I ended up replacing my second generation core i5 2500k with a new Ryzen 5 and I could not be happier. The price was spot on compared to how much more I would have paid for an intel chip and board. I believe I got the Ryzen 5 2600, so one generation back from this 3600 I assume.


Did a 2600x / Vega 56 build recently, and I'm honestly impressed with the value. Total under $1k after tax. I would have spent that before getting a GPU if I had gone Intel.


I went from a 2500K with HD6950 to a 2700X and a RTX2080.

That was a slight jump.

Yep the 2600 is Zen+, 3600 is Zen 2, decent uplift in IPC (about 10%) on its own not worth upgrading.

That 3900X looks tempting though to replace my 2700X.

50% more cores, and faster ones, than an already very fast processor. AMD are on fire at the moment.


Since nobody yet mentioned it, just going to remind everyone that a 16 core, 32 thread desktop chip for Zen 2 will exist, and be priced under 800 dollars.


Source on that <800 dollar figure? Is it all just leaks?



The 2950x is the existing 16C/32T and retails for just under 800 USD right now. I don't see why the next gen product would cost significantly more for similar performance.


Sure. But two memory channels is not amazing. I'll hold out for Threadripper, with 4 channels. Although AMD's Rome Epyc gets a massive 8.


They will surely push the 12-core ones first. They glue together 2 downgraded dies and get one CPU that costs more than the 2 defective dies would if sold separately.

Just brilliant


I don't think binning chips is anything new.

If it performs well and lasts, I don't care if it's nothing but goat's blood and binder twine holding it together.


The trick here is that they reversed the paradigm and managed to sell two downbinned dies for *more* than their combined retail price if sold as standalone CPUs.


The market charges based on perceived value, not cost of materials. You aren't going to be getting a 12-core Intel chip for $500, that's for sure.


They are 8 core dies of a higher bin that would have been clocked the same (or possibly lower) in an equivalent 4 die Epyc part. Given the fact that it is a higher bin, they are rare.

What part of that illustrates they're defective, or that it legitimately costs more than if it was sold separately?


Very likely the 6-core dies simply have 1 or 2 defective cores that were either damaged during manufacturing or failed testing.

As for frequencies, modern day finfets can clock way way higher than planar devices, and are more limited by thermals than anything else.


That's the fairy-tale but most chips come off the line with all cores functional and the manufacturer disables some of the cores. Taken to the extreme - there just aren't that many chips with, say, 4 broken cores and yet the rest of the chip still works. Demand is highest for these cheap parts and supply is lowest, so you disable cores.

The other thing is, these 12Cs are actually clocked significantly higher (and have much more cache enabled) than the lower-end parts. So these are not simply "bad silicon". Really this is a misnomer in general since there is a variety of ways silicon can be "bad". Bad clocks, damaged cache, damaged cores, etc etc.

Binning gives you the opportunity to pick the best cores on a chip too. So if a chip had two cores that could only do 4 GHz but the rest of them could do 4.4 GHz or whatever, then all of them might be functional yet it might fail binning as a fast 8C chip, but you could disable the two derpy cores and ship it as a very fast 6C chip.

The binning process is a deeply trade secret kinda thing, but is undoubtedly much more complex than people generally believe. It's not "top X% silicon becomes Epyc, next X% silicon becomes threadripper", etc. All that is known for sure is that AMD is very efficient at using every part of the buffalo.


The clock ceiling became largely irrelevant with FinFETs; even most low-end Zen dies were hitting 3-3.5GHz when overclocked. Downbinned dies are clocked higher because they have more thermal leeway with 2 cores disabled.


Anyone have any experience with AMD and ECC memory? I was going to do an AMD build for a NAS, but the support for ECC was kinda like:

“We enabled it but didn’t test it and we don’t guarantee it will work.”


Works fine here with 1st Gen Threadripper on ASRock X399 Taichi, running Linux. Some points:

* Not all motherboards support ECC, make sure to check before buying.

* Only unbuffered DIMMs are supported by Ryzen/Threadripper.

* According to motherboard vendors, Ryzen APUs do not support ECC. Ryzen Pro APUs do support ECC but those SKUs are typically not available in retail.


I believe ASRock are the only manufacturer that have come out and said yes, we test that ECC will work, it's just not a supported config. The others don't quite commit that far.

I have an ASRock + unbuffered ECC Ryzen 5 1500 setup. I've checked that Windows does indeed detect errors when they're induced.


Curious how you induced errors?


The common way is to reduce memory voltage or tighten the timings until things start failing. Alternatively, you could probably run rowhammer PoCs.
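
If you want to confirm the errors are actually being counted rather than silently swallowed (the setup above was Windows; on Linux the kernel's EDAC subsystem exposes per-memory-controller counters), here's a minimal sketch, assuming the EDAC driver for your memory controller is loaded:

  # Minimal sketch: read the EDAC corrected/uncorrected error counters on Linux.
  # If /sys/devices/system/edac/mc has no mcN entries, ECC reporting isn't wired up.
  import glob

  for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc*")):
      with open(mc + "/ce_count") as f:
          ce = int(f.read())   # corrected (single-bit) errors
      with open(mc + "/ue_count") as f:
          ue = int(f.read())   # uncorrected (multi-bit) errors
      print(mc, "corrected:", ce, "uncorrected:", ue)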


ECC is also working fine for me on the X370 Taichi with a first-gen Ryzen 7.


Check the motherboard specs; they were a bit hit and miss when I was looking.

I am using an ASROCK - B450M Pro4, Crucial - CT16G4WFD8266 16GB ECC with a Ryzen 5 2600.


I'm genuinely curious: for a NAS, does ECC RAM provide any benefit? I've always just used old desktop hardware, typically from about 4 generations prior (when I upgrade the desktop, the old hardware goes into the lab PC, and the lab PC goes into the NAS). I know there are power consumption benefits to the specialized NAS boards, but would I see any benefit from ECC? Considering the NAS is a RAID array of spinning rust drives, a flipped bit in RAM really shouldn't affect anything seriously. Or am I grossly mistaken?


When you write something to the NAS, it'll land in memory before it's written out to the RAID. If you get an undetected bitflip in that memory before it's written out, it'll happily replicate that error across your RAID.

Worse could happen if a RAM error corrupted the in-memory copy of filesystem metadata (like an extent map) - subsequent writes that rely on that metadata could then cause gnarly corruption.


If that bit flips in RAM before being written to disk, you now have a flipped bit in your file when it gets put on disk. Depending on when this happens, even a super-resilient file system like ZFS may not notice the corruption if this bit flip happens before checksumming the block.
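
A toy illustration of why the checksum doesn't save you in that ordering (plain Python, made-up data, nothing ZFS-specific):

  # Toy example: a bit flip in RAM *before* checksumming is undetectable later.
  import hashlib

  data = bytearray(b"family photo bytes ...")   # hypothetical file contents buffered in RAM
  data[3] ^= 0x01                               # silent bit flip before the write

  checksum = hashlib.sha256(bytes(data)).hexdigest()  # filesystem checksums the bad copy

  # Every later read or scrub compares against that same checksum, so the
  # corruption is self-consistent and never gets flagged:
  assert hashlib.sha256(bytes(data)).hexdigest() == checksum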


If you're running a decent size NAS with ZFS datastores, not running ECC is borderline negligent.


Upgrading one link in the chain while leaving another link alone is not borderline negligent, and people need to stop saying that.


What about the computer over the network sending the data to the NAS? It won't ever bitflip in memory?


Usually it works without problems. Check with the mainboard manufacturer if it supports ECC memory. Most do, some don't.


It works, but does it “work”? As in, does it error correct? Supposedly errors are so rare I’m not even sure how you could test this.


Yes it does actually work. The easiest way to intentionally trigger memory errors is overclocking.

https://www.hardwarecanucks.com/forum/hardware-canucks-revie...


Rowhammer comes handy ;)


Some people overclock the memory. It'll make errors at some point.


If you look closely at what AMD says it might make you question it. They specifically say they don't disable it on their desktop chips, but they don't test it. Basically a YMMV statement.


That's our impression, too. There are AMD motherboards that take ECC memory, but we've never seen them act on ECC errors that were uncorrectable (the correctable errors are handled, but uncorrectable errors aren't reported!).

We will only use Intel Xeon for our work because of this. You'll get about 1 bit flip/GB/year. With 128 GB or more in our standard builds, this would be more than 2/week. We just can't have that uncertainty in the data we provide.
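
For reference, the arithmetic behind that estimate, taking the 1 flip/GB/year figure as an assumed rate (real rates vary a lot by platform):

  # Back-of-the-envelope for the quoted rate (an assumption, not a measurement).
  rate_per_gb_year = 1.0              # assumed bit flips per GB per year
  ram_gb = 128
  flips_per_week = rate_per_gb_year * ram_gb / 52
  print(round(flips_per_week, 1))     # ~2.5, i.e. "more than 2/week"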

And while Cinebench is a useful benchmark, all our heavy number crunching is done on NVidia 2080 architecture so the fact that AMD may have an advantage on some cases isn't that interesting for us. Perhaps if you're a gamer, who doesn't care about an occasional bitflip, looking to squeeze the last drop of value out for his dollar....


If you want to compare to Xeons, the comparison isn't the consumer CPUs this is about; the AMD equivalent is the Epyc line.

AMD doesn't disable ECC support entirely on consumer CPUs like Intel does, but as far as I know it's also not officially supported and guaranteed to work, it's up to the mainboard vendor how to handle this. In the Intel case you simply can't get ECC with non-Xeon CPUs.


Well, even on Intel, it's possible the firmware/OS isn't doing the right thing. This was pretty common ~10 years ago, when the default Linux behavior wasn't to report soft errors in the logs (due to missing drivers/whatever), so a lot of machines might just sit there and correct the errors and the only way to find out was to turn up some BMC/etc logging. I guess that is why you should buy machines from HP/Dell/Lenovo that are fully certified for your OS rather than random whitebox manufacturers, although given the problems I was having with HP equipment at the time, it's questionable.


> can't get ECC with non-Xeon CPUs

Not quite. There are some Core branded CPUs that support ECC, including funnily enough the i3's.


> There are some Core branded CPUs that support ECC, including funnily enough the i3's.

Hell, there are Celeron and Pentium chips that they have it enabled on. Not because they expect desktop users to buy them, but because it allows them to keep their Xeon brand premium while letting OEM's like Dell advertise the T140 "starting at $549" (in a configuration nobody would ever want to buy).


It depends on the use. For a home server or even a small office fileserver, you don't need massive threading capability, and in fact some of those low-core-count parts are fairly highly clocked, which makes them faster.

For example in the 7000 series, the i3 7100 has a 3.9 GHz base clock and you have to go almost to the top Xeon (the equivalent of an i7) to get anything equivalent. And even then it's a turbo, not a base clock, so in principle the motherboard should not let you turbo forever (PL2 time limit may actually be enforced on a server chipset).

Also depending on workload you may not even be able to exploit an increased threading capability anyway, without 10 GbE on the box, or link aggregation capability.


Oh, the i3's are fine for a general small business workload - compared to the socket-compatible Xeon's all they're really lacking is extra PCIe lanes if you need them. That's ultimately what Intel uses to segregate the Xeon and HEDT chips from their mainstream platform, after all.

The Celeron and Pentium chips that have infiltrated entry-level servers are absolute trash though.


There are even Atom chips with ECC: https://ark.intel.com/content/www/us/en/ark/products/97935/i... (for I assume the NAS products that use these?)

It seems like mostly any chip that would compete with the Xeon-W gets ECC removed.


There are no Epyc workstation chips with clock speeds comparable to Threadripper/Xeon-W, at least among the currently released products. Thus I consider Threadripper the closest competition to Xeon-W, not Epyc. AMD also lists ECC memory support as a feature for Threadripper.

That said, the new Xeon-W series has more memory channels (6) and supports more RAM (up to 2 TB) than any existing Threadripper product. I.e., AMD doesn't have an equivalent product for all use cases yet.

However, we don't know the Zen 2 Threadripper lineup and the frequencies for the different Zen 2 Epyc SKUs are also not public yet. AMD could release Threadripper with support for RDIMM/LRDIMM or Epyc chips with higher clock speeds to better compete against Xeon-W.


Supermicro lists tested ECC memory for their AMD mainboards. It would be very strange if that did not work.


The "average" in no way reflects the reality of any given machine. I've been running ECC RAM in my NAS boxes at home for 20+ years (I put ECC on an AMD K6-II). Not once have I seen any of those machines ever report correctable errors (outside of the testing to inject errors I usually perform before putting them in service). Similarly, at work I've had the opportunity to pull BMC/etc logs from a lot of machines over the past decade or so. The vast majority of machines never report any errors. Really rarely a machine will crop up that reports a soft error on some longer cycle (say every 3-5 weeks). Probably at roughly the same rate there are machines that have obviously failed in some way. They go from functional to hard errors pretty much overnight, with generally less than a couple days of warning during which the soft errors were being corrected.

Both cases are hardware errors of some form because usually swapping ram/motherboard/powersupply/etc will clear it up.


You've _never_ seen a correctable error reported in 20 years? I think your AMD motherboard isn't handling these errors right. That's much more likely than that you've never had one in 20 years.

See google's study: https://static.googleusercontent.com/media/research.google.c...


Not on the chain of hardware I run at home (the machines with ECC are ones I spec and configure very conservatively), on other larger collections of machines, sure...

I've seen googles study, and out of the few thousand or so machines I've had statistics collections from, the few machines with soft errors were fixable and stopped reporting soft errors after having something swapped.

The google study itself goes on and on about the variability of errors with such wonderful sections as "These numbers vary greatly by platform. Around 20% of DIMMs in Platform A and B are affected by correctable errors per year, compared to less than 4% of DIMMs in Platform C and D."

The paper really leaves a lot of holes; I don't remember (nor do I see after skimming it) any note of how aggressively they are running the RAM. Did they, say, try to reduce the RAM timings/bump voltage on the platforms they were having issues with? Did they compare how mature the technology was when they commissioned it? Did they try to diagnose the machines reporting high error rates by seeing if they could convert a machine with a high error rate to something lower? They do spend a lot of time talking about temps though. The only valid conclusion I think can be drawn from the paper is "ECC is important, use it, because you will have RAM failures; better to know about it than not".

To me the paper speaks to Google's diagnostic/repair system more than anything. I took a proactive approach and replaced DIMMs/motherboards/power supplies/etc that reported correctable errors. When we were self-supporting we would swap the questionable parts into other machines to see if the failures would follow them, in an attempt to prove that a failing part was marginal. Then return/exchange it if it failed in more than one machine.

I've seen a lot of different failures over time, and when I was partially in charge of designing/picking platforms I even managed to find actual design bugs a couple times that caused low rate error rates (not in the RAM subsystem thankfully). I tended to use the "any kind of failure when run normally is instant disqualification" metric when I was initially picking new platforms before buying them to put in production. I would never have qualified a platform that had a 20% DIMM failure rate. (well at least not purposefully, we got some stinkers but we tried to correct our mistakes).

Given what I've heard of Google, I'm not sure I would really extend these reliability metrics unless you're buying the latest bleeding edge parts and running them well into their design margins. These days it's pretty common to design systems that have error correction and push the physical topology to the point where there is an expectation of a pretty solid error rate (think SSD flash chips). So for a company like Google, pushing the RAM timings/etc right out to the margin where they are experiencing a low but nonzero error rate would seem to be the right thing to do. It's different if you're a bank/etc running financial data. In that case you buy for reliability first.


Yeah, the new Xeon E-21XXs are proving hard to find in the consumer market. Most places are back ordered or just place orders directly with Intel after you buy. The scalpers are charging $50 or more over retail on eBay and 3rd party Newegg/Amazon sellers.


> There are AMD Motherboards that take ECC memory, but we've never seen them act on ECC errors that were uncorrectable (the correctably errors are handled, but uncorrectable errors aren't reported!).

That's incorrect. Uncorrectable errors are properly reported to the OS. Wendell from Level One Techs has tested this: https://www.reddit.com/r/Amd/comments/b1qmgy/ars_technica_th...


I don't understand. At that rate your chance of having a second bit flip before the first is fixed is almost zero, and the chance of hitting two separate bit flips in the same row is ridiculously small.

If you're worried about a single event causing two flips in a single row... I suppose that's possible, but it could also cause three bit flips. So a Xeon has a non-zero error rate. Is Ryzen meaningfully worse?


I think his argument is that you will get bit flips; ECC is just going to report and/or correct them. Without it, you're hoping the bit flips show up somewhere you can detect them (application crash/etc) rather than silently chugging along and ruining your results/data/whatever.

I had the chance a long time ago to work on a product that as a side effect was corrupting system memory... Think of it as a kernel module that picks a random number between 0 and MAX_RAM and flips a byte. It's truly amazing how many of those can happen before there is any visible evidence something is wrong.


You're talking about ECC vs. no ECC. That's not what the comment was saying, it was saying it handled single bit flips correctly but not double bit flips. But at 1 bit flip per GB per year, randomly distributed, you are guaranteed many single bit flips but a double bit flip is almost never going to occur.
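
A rough back-of-the-envelope, with loudly assumed numbers (64 GB of RAM, the 1 flip/GB/year rate from above, one 64-bit data word per ECC code word, and a daily scrub that clears single-bit errors):

  # Rough odds of two independent flips hitting the same ECC word before a
  # scrub clears the first one. All numbers here are illustrative assumptions.
  ram_gb = 64
  flips_per_year = 1.0 * ram_gb             # assumed 1 flip/GB/year
  scrub_days = 1.0                          # assume a daily patrol scrub
  words = ram_gb * 2**30 // 8               # one 64-bit data word per ECC code word

  flips_per_scrub = flips_per_year * scrub_days / 365
  pairs_per_scrub = flips_per_scrub**2 / 2  # expected colliding pairs (birthday-style)
  double_flips_per_year = pairs_per_scrub / words * (365 / scrub_days)
  print(double_flips_per_year)              # ~6e-10 per year: effectively never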


Any plans for AMD to release first-party NUC form factor devices with Ryzen chips? I bet I'm not the only one that both loves that form factor and is also amazed at the perf numbers and core counts of these CPUs.


ASRock's Deskmini A300 is as close as you are getting anytime soon. It's a nice machine but not quite as small as the NUC at 155 x 155 x 80 mm.

Though note that the chip in this article is not going in a NUC form factor because it does not have integrated graphics. The Ryzen chips with integrated graphics top out at 4 cores and the new ones are still based on the older Zen+ microarchitecture.


You can already build small form factor PC's but there are two major issues.

First is that you need a gpu, so the chips just announced aren't going to work.

The second is heat dissipation. It's easy to think any heat sink and fan will work, but the reality is that an integrated gpu and dynamic clocking can generate a lot of heat. The computer will still function, but can end up laggy and much less capable because it is being throttled.

Then there are bios options to be aware of. Dynamic overclocking generates a lot of heat for little gain. The defaults also throttle too soon. Relaxed temperature throttling runs hotter but gives a much better experience. CPUs run just fine at 50C anyway.

This is all to say a small PC is possible now and a NUC might be possible later, but they are still tricky to really get right.


Not sure if mini-itx fits your taste but epyc 3000 may be an option. https://www.servethehome.com/supermicro-m11sdv-8ct-ln4f-revi...


Mini-ITX cases are about 7.5 liters at the smallest (Xbox size), although some specialty cases exist in the 5-6 liter range; a NUC is typically about 1 liter.


Those cases tend to have full-size GPU support. You can find smaller cases for APU usages, like this one: https://www.supermicro.com/products/chassis/Mini-ITX/101/SC1...

But you can also find Mini-STX boards like this one: https://www.sapphiretech.com/en/commercial/amd-fs-fp5v which should then fit in something like this: https://www.silverstonetek.com/product.php?pid=708&area=en

You can also find some very small Mini-ITX cases on things like aliexpress: https://www.aliexpress.com/item/32994653418.html?spm=2114.se...


Here's a company using the V1000 line (low power Zen+Vega APUs) for a fanless chassis.

https://tranquilpcshop.co.uk/mini-multi-display-pc/


I did a mini-ITX build last year with a 2400G inside an InWin Chopin case (https://www.in-win.com/en/gaming-chassis/Chopin). Definitely not as small as an Intel NUC, but it's also a lot more powerful. It can also be upgraded, unlike the NUC.


Intel nucs use lower powered U-series mobile CPUs, which AMD isn't so great at. Hopefully soon though.


What I really want is an industrial PC with these boys for our robots.


I'll wait till the embargo is lifted and detailed benchmarks are available.

The date is rumored to be the 7th of July.


It's not even a rumor. They announced the date.


Calling it a rumor makes the knowledge sound more exciting.

Similar to how product details are no longer "announced" or "released", but rather "leaked".


I must say, I've been quite happy with my now "old" 7/1700. Between it and the NVMe SSD, it's been quite a snappy machine. Even before Zen, I preferred AMD for my builds just for the bang-for-the-buck. It's great to see that they're cranking up the performance per dollar to new heights.


Is it known yet whether the 32-core Threadripper will be discontinued or whether a potential 32-core 7nm Epyc will run at higher clock rates?

I'm searching for the fastest C++ compile machine; that includes linking (which is single-core).


The default answer is the new Threadripper - which doesn't have a confirmed release date yet, but should be out within the next 3-6 months. However, if you really care about optimizing for that workload, the exact answer depends on the codebase(s) you're dealing with because the time spent linking varies significantly.

For instance, if you're working on a large C++ codebase with incremental compilation, rarely do full recompiles and heavily rely on link-time and profile-guided optimizations, I can see a case where a heavily tuned Cascade Lake X makes a lot of sense (overclock the mesh, tune the memory latencies and get enough cooling to sustain the single core boost as high as you can). It's actually one of the few use cases (AVX-512 being another) where I still see a niche for Intel CPUs purely based on technical merit... saying that still feels weird, but that's where we are now.


I'd like to point out that a Zen 2 based Threadripper has not been confirmed or denied by AMD. The only dribble of information we have is that Threadripper was officially removed from the roadmaps, and anonymous sources saying it will still be released. The primary source for our Threadripper dreams says as much: https://wccftech.com/exclusive-amd-is-working-on-a-monster-6...

It'd be stupid for AMD to not release another generation of Threadripper with up to 64 cores, but we don't know anything for sure just yet except what has been detailed about the consumer Ryzen and the server EPYC.


AMD said Threadripper 3000 is coming; I speculate there will be 24/32/48/64-core SKUs.


That'd be my guess as well... I wouldn't be that surprised to see a 16 core entry too (a get-onto-the-TR-platform-and-upgrade-later option). Though the $750 for a 3950X is already pushing my budget, I might consider a 24-32 core for under $1200 if that's where it lands. System budget around $3500.


There will apparently be a 64 core threadripper.


Seeing that screenshot and how well the i7-6900K hangs with much more recent hardware impresses me. I'm quite glad I went with Broadwell-E. Probably means I'll be looking to upgrade around Zen 2+.


as the owner of a 4.3ghz 6900k which I already find way too slow... cries


I've had my 6850K stable at 4.2GHz and I'm not finding it too slow, but I've also got a pair of servers I offload anything I can to, so that has definitely had an impact on my perception of how fast it is.

But I'd say it's far from being too slow. The boot times and load times for most software are fine for my use cases.

It was just too expensive of a build to justify an upgrade so soon.


Are the Intel scores from these benchmarks with meltdown, spectre etc. mitigations enabled or disabled?


This is good news but how many daily tasks come close to being represented by Cinebench?

Is Ryzen a better choice for a daily driver?


Cinebench is a good test for highly optimised graphics performance (mostly floating point arithmetic). This maps fairly well to many common compute intensive applications.

For more general benchmarking something like Geekbench is probably more useful, as it tests performance for memory intensive, cryptographic, integer and other workloads.

Here are the benchmarks: https://browser.geekbench.com/processor-benchmarks The top appears to be dominated by Intel, but AMD is often better scoring at the same price point.


Isn't cinebench mostly video encode and decode? If so, isn't that mostly per-architecture handcoded assembly? Is it possible that AMD has done work to make it more efficient specifically for their architecture?



Depends on your daily driving :)

For web browsing and playing video (i.e. 99% of all users' "daily driving"), I don't think it even matters to consider these sorts of benchmarks.

I do edit video and work with 3D graphics almost daily though, so this sort of comparison is very relevant.


Not worth considering these benchmarks because there’s no effect, or because the system requirements for browsing the web are too low overall for there to be a perceptible difference?

In other words, if you were to actually measure page load speed, are you saying the measurements would correlate with the benchmarks?


My guess is that a 10ms difference in network latency would be a bigger impact on page load times than the latest Zen2 vs Intel CPU.

In reality I think for general web browsing the difference, if it exists at all, in either direction, would not be perceptible.

I know that's not EXACTLY what you're asking, but I think it's worth mentioning the practical aspect as well.


That seems to have been the "common wisdom" for a while now, but I'm not so sure that it holds so much today. With everybody coming out with bloated SPAs is it possible that faster processors will start to be required for normal web work?


But then, anyone with a recent(ish) MacBook is up to date for web browsing and playing video (on the internet). Any post-Lynnfield desktop processor is quite a bit faster (starting at 3GHz), and since Haswell even video playing at higher resolutions is no problem.


Nobody knows until the real world benchmarks arrive. But this is already a nice indicator of the next gen Ryzens being powerful.


Considering Ryzen beats intel clock for clock, core for core and price for the 3000 series, I'd say it's a safe choice. If you were only concerned about gaming at the highest end, then Intel was your best bet... 3000 Ryzen catches up on single core.


Jim Keller strikes again. I swear every place this guy goes, groundbreaking chip architecture work is done.


It looks like he left AMD in 2015 and as of last year works for intel...

https://en.wikipedia.org/wiki/Jim_Keller_(engineer)


Yes, he’s head of our architecture group now.

Source: I work at Intel currently.


It must be scary, imagine being this influential to the global economy.

His advances literally dictate the efficiency of 2% of the world's power output, and that's just one metric of influence.


So it looks like the 3700X should be reasonably expected to be on a par with or better than the i9-9900K?

And that's very much in the middle of the zen 2 offerings... and at a significantly lower price.


Yes. The only question left is single-thread perf over a range of applications. That's what everyone is waiting to see. Part of that will include how well it handles fast memory. Currently Intel is in a different league when it comes to that.


> The only question left is single thread perf

I thought this benchmark was about single thread perf? From the article -

"However, according to a Cinebench score shared by Videocardz on Twitter, the 3600 levels with the Core i7-9700K in the single threaded test."

Whilst this is only one benchmark, it's looking like they've caught up or even overtaken on single thread performance too.


The IPC is almost identical. The biggest difference will be the maximum clock speed. Overclocked Intel CPUs can easily reach 5GHz, but AMD CPUs usually land between 4.4 and 4.7GHz, which is still a 10% difference.


Overclocked CPUs are not exactly a mass-market thing are they? I mean, outside of a few niches, does anyone bother?


9900K has 5GHz turbo boost out of box, you just need to have enough cooling. I would say the best CPUs are not a mass market anyway, and if you're talking about the mass market, you should think about performance per buck - and AMD has beaten Intel there since Zen's introduction.


> 9900K has 5GHz turbo boost out of box, you just need to have enough cooling.

Anecdata here, but...

Depending on your type of workload, you don't need anything more than your typical heat sink and fan. Using Prime95 as a torture test, my 9900K will clock to 5 GHz just fine on air cooling alone without thermal throttling, unless I'm doing the small FFT test, which creates a workload that fits entirely within CPU cache. I'm not sure I'd consider that much of a real-world benchmark though.


For example, on the iMac you won't get 5 GHz. Video: https://youtu.be/f_TTGYC4tmo - it runs around 3.9 GHz in a prolonged test, while sitting at an (IMO) dangerous temperature of 93C.


Ridiculous that the person who posted that video is insisting that it's not undergoing thermal throttling. Apple is deliberately underclocking it to reduce the heat it generates.


I want to see a performance comparison between Zen 2 and Intel with security mitigations enabled on Intel (AMD don't need any with Zen 2).

Should be interesting, and I suspect even on games AMD will be 10%+ ahead.


I'm quite excited with what AMD has pushed on socket AM4 over the last three years.

Come September they're going to release the 3950X, and if folks can wait a bit, you get 16 cores/32 threads on one socket.

But even on a budget, the 3600G looks like a great buy with Navi 20+3600 on chip.

The deal is that they have had reasonable Spectre mitigations working for the last few gens, and Intel has to fix this if they want to be competitive. Not to mention standardizing on a chipset.


Note that AMD's mobile/APU parts carry this year's series number but last year's CPU architecture. If you really like the IPC gains of Zen 2, you may be disappointed with an APU that only has Zen+.

Intel runs things the other way lately, mobile gets released first, then desktop, then server.


I haven't seen that 3600G. I've seen announcements for a 3400G and a 3200G, and although I'd love an APU with 20 Navi cores in it, I don't think it'll happen this time around. Maybe in a year or so.


IIRC the 3000 G series is 12nm Zen+ with some extra tweaks, not Zen2... same for the 3000 mobile series.


That's true. Same number of cores, same number of GPU cores. Clock speeds are better and IIRC the cache is bigger. I'm curious as to what AMD will put in their APUs once they move them to 7nm.


I need single core performance and I am maxing out 9900K @ 5.1. If AMD proves to be faster than that, show me where to pay.


Did you account for all the vulnerability patches on Intel? Some benchmarks might not show it, but in real life you'll get that performance hit (or you'll need to disable those protections, and some people don't like that).


If your priority is single-threaded performance, why did you choose 9900k? I was under impression 9700k is single-thread king.


9600k, 9700k, and 9900k are all single threaded king, effectively, at least until Zen 2 launches on 7/7.

9600k has a 3.7ghz stock base clock, 9700k and 9900k have identical base clocks of 3.6; in order, their stock single core turbos are 4.6, 4.9, and 5.0.

Now, you could be asking yourself "but Diablo, in single threaded, the 9900k is 8.6% faster, and the 9700k is 6.5% faster than the 9600k, respectively". They're all k parts, just overclock it the tiny bit of the way for a fraction of the price ($230 vs $410 vs $490), and the 9600k has far more thermal and power budget per-core to play with than the other two do.

I almost built a new desktop with a 9600k, the $230 6/6 part that shames (in both single and multithreaded) the highest end $340 or $350 4/8 parts from the previous four generations (4770k, 4790k, 6700k, 7700k); vs the $360 8700k (6/12), it performs identically in single threaded, and illustrates that hyperthreading only gains you about 25% extra performance if you can saturate all 12 threads.

Remember, these are desktop chips, you are very unlikely to ever need more than 8 threads (and use them effectively), even if you game. If you happen to have a use case where you can easily saturate >8 threads, anything LGA115x is probably inappropriate for you anyways.

However, with the 9000 series release, Intel admitted AMD scared the fuck out of them, and re-released Coffee Lake at a lower price with some tweaks in speed and core/thread count because they were afraid of Zen 2 being a success; their fears were warranted.

The $250 3600x (a 6/12 part) beats the 9600k in single threaded, widely beats it in multithreaded, and has PCI-E 4.0, with an extra 4 CPU-bound PCI-E lanes (20 vs 16 on AM4 vs LGA115x), and uses a higher DDR4 clock (stock 3200 vs 2666, with an effective maximum of possibly over 5000 vs around 4133; AMD's IMC seems to continue to be effective at lower CAS latency than Intel's is).

Side note: 4790k, a Devil's Canyon part, is Intel's stand-in for the non-existent 5700k.

Devil's Canyon is a Haswell Refresh part that got a second set of 4000 series model numbers instead of 5000 series. Haswell Refresh was a Broadwell core paired with a DDR3 controller, fabbed at Haswell's node size; there are no architectural changes between Broadwell and Haswell in the core, the only major changes were the decrease in node size and the swap to DDR4, and the core design remained nearly identical. Broadwell largely ignored the desktop, focusing on LGA2011 and mobile parts instead, leaving Haswell Refresh to fill in the gap for the desktop and Xeon E3s, with the only notable exceptions being a small set of Iris Pro GPU parts.


> there are no architectural changes between Broadwell and Haswell in the core

There were certainly some changes. The gather instructions were dramatically improved, taking ~5 uops instead of ~30 and with much better throughput.

Conditional moves only take 1 uop in Broadwell, down from 2.

Some other changes listed here:

http://users.atw.hu/instlatx64/HaswellvsBroadwell.txt

Intel's tick-tock model was never black and white: even the "ticks" (node shrinks) received some changes and even new instructions.


Sorry, I should have written major changes. In practice, benchmarking identical Haswell vs Haswell Refresh (that, again, are effectively DDR3 Broadwells), such as with an E3-1230v3 vs a E3-1231v3, I did not see anything that wasn't within a reasonable margin of error.


That impression is wrong. The 9900K clocks higher by default and usually overclocks a little higher as well. Thus it's also single-thread king.


Interesting. Is it still faster with latest mitigations for spectre etc?


9700K has 12MB L3 cache and 9900K has 16MB L3 cache. It might help with some tasks.


At Computex they were claiming the 4.5Ghz 3800X has 1% higher single threaded performance in Cinebench than the 9900K, presumably both were measured at stock clocks. The 3950X has 4.7Ghz boost clock stock so it'll be interesting to see how high ST performance can be pushed with overclocking.


Switch away from Python? use PyPy?


Anyone know where to find an inter-thread latency benchmark comparison?


[flagged]


Actually I'm not doing anything, I just want to see some numbers. Nobody has anything?

Is there any public access to a shell on one of these things to run your own benchmark?


They won't be released for two weeks, so we're mostly seeing cherry-picked benchmarks that favor Zen because they have no core-to-core communication.


does this mean they broke the embargo for 7/7??


The source, Videocardz, deals in leaks. Someone broke the NDA but I don't see anything in the picture that would reveal who.



