Hacker News
AMD Ryzen 7 1800X Benchmarked – Giving Intel’s $1000 Chips A Run For It (wccftech.com)
427 points by mrb on Feb 19, 2017 | 262 comments

More importantly for some, it's a 95W AMD part vs a 140W Intel part. If the power delta stays the same even for mainstream parts, does this mean AMD will become a good quiet option too?

By quiet I mean "I can't hear it at night" not "I can't hear it over the music".

Edit: Also, I think data centers these days are limited in density by power delivery and cooling. Even a 25W delta adds up when you're talking about thousands of servers.

TDP is a bit tricky to compare in this situation. The TDP of Intel's HEDT chips is dictated by their 256-bit wide AVX units, which drive power consumption through the roof when fully loaded; the chips don't get nearly as hot when AVX isn't being used. Ryzen doesn't have this edge case because it sticks with 128-bit SIMD units and runs 256-bit AVX instructions over two clock cycles.

I expect the "140W" Intel part will have closer power consumption to Ryzen in non-AVX loads, and outperform Ryzen in AVX loads while using more power.

Intel chips use a lower base clock frequency when they're running AVX code than when they're running non-AVX code, for the very reasons you mention. In Intel's example[1] they cite a base frequency of 2.3 GHz without AVX dropping to 1.9 GHz with AVX. That drop isn't much in frequency terms, but you also have to figure in the voltage reduction the lower frequency allows, which cuts power much more than it cuts frequency. The end result is power dropping somewhere between the cube and the square of frequency, depending on whether active or leakage power is dominating. Which is to say that they can save between 30% and 40% of their power by reducing the frequency at the same time they're lighting up a large area of new silicon. So I assume they chose the specific frequencies so that they're always hitting the same TDP whether they're making use of AVX or not.

Of course, performance scales not too far from linearly here, so going to AVX is a large net performance win, as you say. It's just that I'd assume equal power consumption on non-AVX loads.
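The frequency/power math above is easy to check with a quick sketch (the 2.3/1.9 GHz figures are from Intel's example; the two exponents bracket the regimes mentioned above, and the whole thing is a back-of-the-envelope model, not a measurement):

```python
# Relative power at the AVX base clock vs. the non-AVX base clock.
# P scales roughly with f^3 when active (dynamic) power dominates,
# since voltage can drop along with frequency, and closer to f^2
# when leakage dominates.
base_freq = 2.3  # GHz, non-AVX base clock (Intel's example)
avx_freq = 1.9   # GHz, AVX base clock

for exponent, regime in [(3, "active-power dominated"),
                         (2, "leakage dominated")]:
    relative = (avx_freq / base_freq) ** exponent
    print(f"{regime}: ~{(1 - relative) * 100:.0f}% power saved")
```

That works out to roughly 32-44% saved from the clock drop alone, in the same ballpark as the 30-40% figure above -- headroom that AVX's extra active silicon then consumes.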


I agree with you, but when I clicked your footnote source - oh god, the marketing wank. The problem with all of these benchmarks (starting with LINPACK in 1979, up through the modern benchmarks used in the TOP500 or the TPC, which models a bank for RDBMS performance) is the synthetic nature of the tests and the unreasonable locality of what they end up testing. One of the bazillions of reasons why the base frequency can drop in those tests Intel used is that your CPUs aren't context switching or doing the things a normal MSSQL or Oracle DB will do.

I.e., LAPACK/BLAS benchmarks are just really big linear algebra matrix problems, so obviously your pre-fetch and branch prediction performance will be significantly better since you aren't dealing with interrupts, locatedb, or Windows DCOM events firing off in the background. You have a huge set of matrices with a very predictable set of branches, fetches, and decodes, so obviously your CPU can optimize for that load, you're just paying for it in latency on the back-end (RAM fetches are the new disk swap ;)).

All those benchmarks (i.e. your standard LU matrix decomposition, which previously was the basis of the LAPACK benchmarks, though things might have changed in the ~10 years since I've really looked at things) aren't CPU-bound anymore, so of course your instruction-per-cycle load on the CPU isn't where you'll be bottlenecking (and hasn't been since "let's avoid floating-point operations and just use static look-ups instead since we don't want the 10x cost of using the FDIVP instruction!"). Your processor can very easily anticipate from where in that sparse matrix your next data fetch is going to be. It's the cost of that RAM fetch[1] going along that copper trace which is where you're going to bottleneck on any heavy numerical computation.

The power consumption on your CPU might drop a nominal amount which is great for those marketing white papers, but for a numerically heavy load, you're paying just as much (in total power consumption per 4U in the data center, total heat generation/dissipation within the case, and total processing time) on the back-end for those fetches.

[1] https://i.stack.imgur.com/a7jWu.png (I normally cite academic references, but this is 'good enough' to convey my point, I hope).

> All those benchmarks (i.e. your standard LU matrix decomposition, which previously was the basis of the LAPACK benchmarks, though things might have changed in the ~10 years since I've really looked at things) aren't CPU-bound anymore, so of course your instruction-per-cycle load on the CPU isn't where you'll be bottlenecking [...]

That's incorrect: DGEMM and most of BLAS3 sit way above most if not all processor uarchs' arithmetic-intensity thresholds [1]. Broadwell CPUs are at ~10 Flops/byte [1] while e.g. DGEMM is ~32 Flops/byte [2], so that's definitely FLOP-bound and not memory-bound.

> Your processor can very easily anticipate from where in that sparse-matrix your next data fetch is going to be. It's the cost of that RAM fetch[1] going along that copper trace which is going to be where you're going to bottleneck on any heavy numerical computation.

You're mixing things up, it seems! LAPACK/BLAS is dense matrix, not sparse, so you've switched topics. Sparse matrix ops are generally <1 Flops/byte (see [2]), so those are indeed memory-bound.

[1] https://www.karlrupp.net/wp-content/uploads/2013/06/flop-per... [2] http://www.siam.org/pdf/news/2090.pdf
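To put numbers on the arithmetic-intensity argument, here's a roofline-style sketch. The ~10 flops/byte machine balance is the Broadwell-class figure cited above; the DGEMM traffic model (each matrix moved once) is a deliberate idealization, which is why it yields n/12 rather than the fixed ~32 flops/byte figure from [2]:

```python
# Roofline-style check: is DGEMM compute-bound or memory-bound?
def dgemm_arithmetic_intensity(n):
    flops = 2 * n ** 3           # multiply-adds for C = A @ B
    traffic = 3 * n ** 2 * 8     # read A and B, write C (doubles, ideal reuse)
    return flops / traffic       # simplifies to n / 12 flops/byte

machine_balance = 10.0  # flops/byte, Broadwell-class CPU (from [1] above)

for n in (100, 1000, 4000):
    ai = dgemm_arithmetic_intensity(n)
    verdict = "compute-bound" if ai > machine_balance else "memory-bound"
    print(f"n={n}: {ai:.1f} flops/byte -> {verdict}")
```

Past n of a few hundred the kernel is squarely on the compute side of the roofline, which is the point being made; sparse kernels, by contrast, stay pinned on the memory side.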

The Intel chips (at least the Xeon ones, don't know about the consumer parts, but I'd presume they behave the same?) clock down when running AVX2-heavy code to fit within the power envelope.

And it's a likely reason why Intel tried to push the whole "Scenario Design Power" concept a few years back.

The 65W versions should be trivial to cool silently, a $40 heatsink with a single low speed 12cm fan (500~600 rpm) does the job well with my CPUs.

For a 95W TDP you might need to spend a little more (like on those big Noctua heatsinks with 14cm fans), but silent air-cooling is definitely possible. (Water is always trickier, AIO kits will typically be a little noisier due to the pump).

At these TDPs, fully passive cooling is an option:


That's huge :)

In a smaller HTPC it can make sense to go passive but for an 8-core desktop workstation I'm not convinced it's worth the trouble. Your PSU and GPU will likely have fans, also with a PCIe SSD it's better to have some airflow.

It's now possible to build very high-performance rigs that run completely silently. Palit offer a fanless 1050 Ti; Asus Strix 1060/70/80 cards will run with the fans stopped under light loads. Fanless 500W PSUs are readily available, which will happily power an eight-core machine with a GTX 1080. The brilliant Silverstone FT03 case uses a chimney-like vertical design that can create significant airflow through pure convection.

Passive cooling isn't for everyone, but the remarkable efficiency of modern components has made it a perfectly feasible option even for high performance workstations. You can choose whatever level of noise you prefer, from extremely quiet to dead silent.

The biggest reduction in computer noise I've noticed over the past ten years has come from eliminating HDD chatter by switching to SSDs. I still have one older PC that I use occasionally that has an HDD, and the noise is pretty serious. You can feel when the filesystem is under load by the frantic vibrations of the arm moving across the disk.

Fans, in comparison, tend to be a steady thrum, and thus fade into the background in a way that HDD chatter doesn't.

I second this. Even without chatter a spinning HDD is quite noticeable while fans are not that noisy as long as you are able to keep them at moderate speeds.


This is a good site to keep an eye on if you're interested in fanless computers.

It would be no longer fully passive, but I'm curious to see what the performance of that heatsink would be with a suitably-sized squirrel cage fan inserted into it. That is huge.

>I'm curious to see what the performance of that heatsink would be with a suitably-sized squirrel cage fan inserted into it.

Surprisingly poor. A large conventional heatsink like the Noctua NH-D15 has far greater surface area. The densely packed fins of a conventional heatsink perform poorly if you're relying on convection, but they will dissipate a lot of heat with even modest airflow.

The 6-core 140W 6800K I have under my desk is cooled with a Noctua NH-U14 - very quiet, and the max core temp I've seen under load (mostly video transcoding) is 54 degrees. That's about 30 degrees above my office room temperature.

AFAIK the Ryzen 1700 (TDP 65W) is supposed to come with a "Wraith" cooler from AMD and those new ones are apparently better than their shitty older counterparts from the Vishera days (I had one and I'm glad I switched to a near silent Arctic cooler).

Same here, the Arctic cooler was $20 and the difference is dramatic. Cooler temps all around and near silent at modest loads, where the stock cooler was annoyingly loud.

Worse, loud fans seem to fail the most. I've always been annoyed when a $75 CPU comes with a $.25 fan that is going to crap out in a year or two.

Calyos[0] technology might interest you: fanless and totally silent. Here is the other Linus[1] talking about it.

[0]: http://www.calyos-tm.com/calyos-fanless-pc-workstation/

[1]: https://www.youtube.com/watch?v=9PJOrfpiVwE

Slick. Heat pipes from the CPU and GPU back to a large passive cooler embedded in the case. I'd be curious what the limits are - how much heat it can absorb before the lack of active cooling affects performance.

Is AMD still publishing the real power envelope of their chips while Intel is publishing "normal" power consumption?

I believe it is. AMD's TDP is the maximum the CPU will consume, whereas Intel's is the maximum under normal workloads, meaning the chip can be pushed past the advertised TDP.

If your goal is quiet and quiet alone, I would suggest a good cooler. A Cooler Master Hyper 212 is really quiet.

I thought data centers were going for ARM CPUs now, as shoveling data is not all that CPU intensive?

> A cooler master hyper 212 is really quiet.

I second this. I've been using the Hyper 212+ for 5 years and it's been great. The 212EVO looks just as good. Only drawback is the height (not an issue with larger cases).

Funny aside: I actually replaced the fan on my 212+ thinking the one it shipped with had failed. Turns out I'm an idiot and either didn't install the rubber dampeners correctly or the mounts came loose during shipping (can't remember which). The replacement's been solid for 5 years and although the original was relegated to spare (nothing wrong with it), I doubt I'll ever need it before I retire this build.

DEFINITELY better than the crummy cooler my Phenom II shipped with. That thing was noisy and ineffective. Next upgrade is definitely including the newer series 212 as well.

All CPUs are silent when you water cool them. You can get a decent closed loop cooler for <$100 now.

Actually it very much depends on what kind of fan cools the radiator. You're using that liquid loop just to move the heat from the CPU to somewhere else. That somewhere else is generally a large radiator with a quiet fan, but not always.

Anyone building their own silent pc should hang a bit on silentpcreview before they buy anything.

Water cooling rigs are relatively noisy by modern standards. A really quiet pump produces about 19dBA@1m. That's about the upper limit of "quiet" these days. On top of that, you have one or two fans blowing through a radiator. A good air cooler will easily handle 125w TDP at acceptable temperatures while producing as little as 11dBA@1m.

A Noctua NH-D15 or a be quiet! Dark Rock Pro 3 will match or exceed the performance of a 240mm AIO, with considerably lower noise in a decent case.

There aren't a lot of applications where water cooling really makes sense. If you're cramming a very high performance rig into a tiny ITX case, you might be better off with an AIO. Competition-grade overclocking rigs will naturally benefit from a custom open-loop watercooling system. Otherwise, modern components just don't put out enough heat to stress a good air cooler.

The performance of water cooling can be deceptive, because of the relatively high thermal mass of the water in the loop. If you fire up Aida64 on an air cooler, the CPU temperature will climb fairly rapidly before levelling off. A water-cooled CPU will see a much more gradual increase in temperature, taking up to an hour to reach a maximum. Water coolers look really impressive if you only stress them for a short benchmark run, but they're far less impressive under sustained load.
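That thermal-mass effect can be illustrated with a first-order lumped model. The resistance and capacitance values below are made-up but plausible orders of magnitude, chosen only to show the shape of the curves:

```python
import math

# First-order lumped model: C * dT/dt = P - (T - T_ambient) / R, whose
# solution is a rise of P*R * (1 - exp(-t / (R*C))) above ambient.
def temp_rise(power_w, r_k_per_w, c_j_per_k, t_s):
    return power_w * r_k_per_w * (1 - math.exp(-t_s / (r_k_per_w * c_j_per_k)))

P, R = 125, 0.3             # 125W load, same steady-state resistance for both
air_C, water_C = 500, 4000  # J/K: metal heatsink vs. water loop (big mass)

for t in (60, 600, 3600):
    print(f"t={t:4d}s: air +{temp_rise(P, R, air_C, t):.1f}K, "
          f"water +{temp_rise(P, R, water_C, t):.1f}K")
```

Both eventually settle at the same +37.5K, but the water loop's much larger time constant means it looks far cooler for the first several minutes of a benchmark - exactly the deceptive behavior described above.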

These last few years Intel was working much more on bringing power consumption down than... anything else? Performance changed little: http://www.hardocp.com/article/2017/01/13/kaby_lake_7700k_vs... 20%? But now the 15W U-series mobile chips deliver the same performance as the 35W M chips of the Sandy/Ivy Bridge era, which is quite impressive.

This, of course, leads to a tremendous advantage in server CPUs -- if you have capable cores at a lower wattage, you can add more of them. Hence the 14-core E5-2690v4 at 135W and 2.6GHz vs the 8-core E5-2690 at 135W and 2.9GHz. So in just four generations, from Sandy Bridge to Broadwell: no TDP change, roughly the same or a bit better single-thread execution, but almost double the core count. If you're willing to drop your base clock a tiny bit further and let your TDP climb -- perhaps 10% less single-thread performance -- you can get a hulk of a 24-core E7-8890v4 at 165W. And that's where the big profit is -- currently.

Now some unfounded nonsense: what if Intel is not pulling these crazy prices out of their sorry behind, but there's in fact some reality behind them? It just bothers me that the price of the 24-core chip is so close to 6*6 times that of a 4-core chip. It could be a coincidence.
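The cores-vs-clock trade in those two SKUs is easy to tabulate, using cores × base clock as a crude throughput proxy (it deliberately ignores IPC gains, turbo, and memory bandwidth):

```python
# Aggregate throughput proxy for two 135W Xeons four generations apart.
chips = {
    "E5-2690 (Sandy Bridge)": (8, 2.9),    # cores, base GHz
    "E5-2690v4 (Broadwell)": (14, 2.6),
}

for name, (cores, ghz) in chips.items():
    print(f"{name}: {cores * ghz:.1f} core-GHz at 135W")
```

Roughly 23 vs 36 core-GHz in the same power envelope - a ~1.6x gain before even counting Broadwell's per-clock improvements.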

I think you sum it up quite well: the main thing Intel did was put more cores on the chips. However, that happened only for the server lines. With Sandy Bridge and before, the top desktop and server chips were nearly identical; now there's a huge difference. Which also means that over the last few years the performance of desktop CPUs barely increased. That might be OK from the point of view that normal users would mostly not benefit from more than 4 cores. However, assuming that developing and manufacturing the top-end model of each generation costs roughly the same, the new-generation desktop models should probably be a lot cheaper than they are currently sold for.

There's been a fork in CPU design goals. Before, the fastest chip was the goal. Now, we have fast single-threaded vs high parallel throughput - and while there's a spectrum between them, the extremes matter, it's not a case of a Goldilocks solution.

Consumer loads are mostly limited by single threaded performance, though software is increasingly written for more cores. Best choice for a consumer used to be dual core, now it's probably quad core with thermal limited boosting on less parallel workloads.

It's only prosumers doing lots of transcoding or rendering, or CPU intensive VMs, that typically benefit from increasing cores above 4.

Whereas the server space prizes throughput and much of its workload is trivially parallelizable. On the server, higher single core speed mostly just decreases latency for small tasks; if you're happy with the latency, you can get more cores working on shared memory and potentially get big wins in perf.

There are diminishing returns though, scaling up boxes is expensive and unless it's being forced by software licensing or architectural models, things like Hadoop and Spark for spreading the load across a whole cluster are increasingly attractive. This helps solve the I/O throughput problem too.
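Amdahl's law is the usual way to make that consumer/server split concrete. A quick sketch (the parallel fractions below are illustrative guesses, not measurements):

```python
def amdahl_speedup(p, n):
    """Speedup on n cores when a fraction p of the work parallelizes."""
    return 1 / ((1 - p) + p / n)

# A half-parallel desktop load vs. a mostly parallel server load.
for p in (0.5, 0.95):
    print(f"p={p}: 4 cores -> {amdahl_speedup(p, 4):.2f}x, "
          f"24 cores -> {amdahl_speedup(p, 24):.2f}x")
```

At p=0.5 the jump from 4 to 24 cores buys almost nothing (1.60x to 1.92x), while at p=0.95 it more than triples the speedup - which is why the desktop sweet spot sits near 4 cores while servers keep adding them.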

You also had the whole thing AMD seemed to be attempting with their first-gen APU design: shifting the floating-point workload over to the GPGPU rather than using a dedicated floating-point unit.

Server CPUs are often used in workstations, which are pretty much desktops.

I just speced a new laptop with a Xeon E3-1505M v6.

Xeon E3, desktop or mobile, is just an i5/i7 with ECC support. Your CPU falls squarely between the i7 7820HQ and 7920HQ frequency-wise (and 100 MHz at 3GHz is a negligible difference), and aside from ECC there's no difference: https://ark.intel.com/compare/97462,97496,97463

The big difference is in the E5/E7 chips.

Hm, I wish we'd also jumped from 2 to 4 cores on U processors. Then we'd finally have a big jump for consumer notebooks. My U-series Core i7 from 2013 is still one of the better ones in 2017 when looking at raw CPU benchmarks, which makes me think: what happened over the last 4 years? Only a 16% increase compared to the top i7 U processors from 2017?!

With the -U types, technology gains have been invested in lowering TDP, making for smaller and longer running laptops.

E.g. with my last laptop change, I went from a 35W TDP i7-M to a 15W TDP i7-U, while still gaining some performance.

Laptops with 35W/45W CPUs still exist, but they use e.g. the quadcore i7-6700HQ.

Very much this. I replaced an old 6700HQ laptop with the latest 7500U and battery life is worlds better. My previous machine was lucky to go three hours, and that was intentionally throttling the CPU, while my new machine can go 4-5 hours pushing it hard, or all day if I'm not doing anything intensive.

> old 6700HQ

I'd hardly call that old.

And that's not an apples to apples comparison. The 6700HQ is a quad core part, whereas the 7500U is dual core (both of them feature hyperthreading, so 4 and 8 threads respectively).

Given equivalent software, the 6700HQ will outperform the 7500U in multithreaded workloads.

The mainstream market moved to smartphones & tablets, and so did the effort.

But Intel isn't doing smartphone CPUs, so again: what happened?

They were, 1-2 years ago, but for whatever reason they never managed to break into that market.

That's true if I look at the raw CPU benchmarks of ARM chips... there it's normal to see 40-60% jumps each year, which is crazy.

Except AMD's CMT implementation blows Intel's outta the water.

If intel needs 20% more cores to match the MT perf on the high end then it really isn't an advantage.

The Clustered Multithreading (CMT) part of AMD's recent Bulldozer generation seems to have worked very nicely by itself. But the chip's cache hierarchy was quite poor, and it's not clear whether using CMT led to the cache problems they were having or not. In the end, Bulldozer used CMT and performed poorly. Now AMD has abandoned CMT in Zen in favor of the Simultaneous Multithreading (SMT) that Intel uses, and this chip looks like it's going to perform quite well. So without more details than we're ever going to be privy to, it looks like CMT was a dead end.

Wow... I really hope this is true and that it's a sign that AMD have their mojo back. Partly because I have a sentimental favoritism towards AMD, and partly because Intel (and Nvidia) need competition.

AMD giving competition to NVidia would require them to actually put some effort into their drivers to get tolerable performance and a list of bugs that isn't as long as my arm.

You say that as if nvidia's drivers aren't also utter trash.

NVidia's drivers aren't regularly blacklisted[1] or complained about[2] for being absolute crap

[1] https://github.com/PCSX2/pcsx2/commit/26993380b16487649c2ae5... [2] https://dolphin-emu.org/blog/2013/09/26/dolphin-emulator-and...

point 1 true,

point 2 bullshit, people complain about nvidia's drivers all the time.

2 is arguable because every single manufacturer has terrible drivers, true.

As a developer, the amount of inconsistencies in AMD drivers is baffling.

An AMD 40MHz 80486 was my first build, so I also have a soft spot. At the time it was the fastest x86 you could buy. Geez, I even remember lusting after the ATi 8514 Ultra, one of the earliest GPU accelerators, a clone of IBM's high-end workstation standard of the early 90s, as a teenager. Wow, I'm getting old.

Conversely, though I admire Intel for its central place in advancing computing itself, I cannot love the company because it has shamelessly monetized its monopoly over the past 7 years or so with vast overpricing on higher end lines. I for one am definitely doing a Ryzen 7 build as soon as I can get a chip, and the same goes for Vega. So happy to see AMD back in the game.

I remember those days and bought AMD then too. There was even a third competitor in those days, Cyrix.

I remember Cyrix being later in the game with its 686/166MHz proc, which did wonders.

Cyrix had a 486.

They marketed 5x86 and 6x86. Faster integer performance than Intel; the FPU was the weak spot.

and 586 and 686: https://en.wikipedia.org/wiki/Cyrix_6x86 They used a Performance Rating, so the "PR166" wasn't actually running at 166 MHz but 133 MHz. The performance rating was meant to show that it was "equivalent to a 166 MHz Pentium".

Don't forget WinChip too...

Mine was an AMD 386DX-40, followed by a 486DX4-100. And over the years, I've remained somewhat partial to AMD based systems, even when they didn't have the edge in absolute performance. So yeah, I would love to see AMD get it together and start kicking ass again.

I also like AMD because they've been at least a little bit more "open source friendly" than some of their competitors (cough Nvidia cough). Which is not to say that they couldn't do more, of course.

AMD 386DX40 --> AMD 486DX5-133PR75 --> K6-2 300MHz --> Duron 900MHz --> Athlon X2 1.5Ghz --> FX4100 --> FX8370E

There is so much hype with AMD atm. In particular sites like WCCFTech have gone almost rabid with hype. My experience is that AMD tend to over promise with their marketing and rumours. So whilst the hype may well be warranted this time, I'm going to wait for the actual reviews to come through and take these leaks with a grain of salt.

Fair enough, but this is a bit more concrete than marketing fluff.

To be honest, Intel marketing overpromises too. A lot. I think people are just so used to it happening all the time that it's nothing out of ordinary, so it doesn't even look like hype.

Take everything WCCFTech writes with a pinch of salt; all of this could still be fake. The Ryzen hype train is on full steam and I really hope they deliver, but I'll wait for official benchmarks.

In the last few years, more often than not AMD hardware leaks have shown potential that in the end wasn't met. Things look good for Ryzen though, maybe they can finally make 6-8 Core CPUs mainstream.

Which website would you consider "official"? Anandtech?

By "official" I think he meant the benchmarks done by the websites themselves, using proper methodologies. In this article the benchmark was submitted by a user, using synthetic tests from a single application.

This article is based on images some anonymous user posted on Chinese forums, so by "official" I just mean places like AnandTech or TechPowerUp that do their own benchmarking in a professional environment.

My experience is they actually do deliver on their promises, yet always too late. It was the case with the 120MHz 80486, the K6-2, the Athlon, the Opteron, and I'm just getting started. All about as fast as their marketing claimed, but simply too late, so the next-gen Intel caught up for the most part.

Uh... did you forget about Bulldozer?

It did badly compared to what their marketing team was telling users.

The past two or so architectures underperformed what the marketing team had in the slides.

Bulldozer and recent GPUs like Fury/X or 3xx/4xx Series underdelivered after initial rumors had them beating everything.

I'm trying to put this into perspective.

AMD having a temporary edge over Intel has happened before. Remember when AMD had the Athlon in 1999 and made a huge $1 billion in profit?

Intel's response has always been the same. They cut their profit margins and start selling chips cheaper. They undercut AMD with volume and price and suddenly AMD is in the doghouse again struggling. AMD's best efforts can cut into Intel's profits, but Intel's response is to remove all profits from AMD until it's left behind.

Just compare these two:

Revenue / gross profit / gross profit margin (September 2016)

AMD $1.1B/$930 mil/4.5%

INTEL $60B/$16B/60+%

Even if you add $5.5B revenue from GLOBALFOUNDRIES (manufacturer of AMD chips) to make AMD camp comparison more relevant, there is large difference.

>Intel's response has always been the same. They cut their profit margins and start selling chips cheaper. They undercut AMD with volume and price and suddenly AMD is in the doghouse again struggling. AMD's best efforts can cut into Intel's profits, but Intel's response is to remove all profits from AMD until it's left behind.

The last time Intel did that it was through illegal tactics - unlikely they'll get away with it a second time.


Doing illegal things is just optimization.

Intel can undercut AMD lawfully if it wants. Instead of hidden rebates, Intel can openly cut prices.

You're failing to acknowledge that AMD can just as easily cut prices to compete. If Intel starts selling at a loss they'll be subject to even more punitive damages than they faced for bribing OEMs. Not to mention they'd be hung out to dry by shareholders.

> AMD can just as easily cut prices to compete.

They can't. They don't have profit margins or cash in hand to do that.

> Intel starts selling at a loss they'll be subject to even more punitive damages t

I think you failed to see my argument. Intel has 60% profit margins. They can cut their prices a lot without making a loss. AMD cant.

That first line is amazingly brazen. If a movie is ever made about Intel's monopolistic practices, it fully belongs as dialog spoken by the Intel CEO.

It was. The whole debacle was probably just a decision worth a few percentage points of Intel's profit margin. For AMD it was a much bigger issue.

The usual reason mentioned for Intel allowing AMD is to avoid being broken by monopoly laws.

If AMD still has energy and gets loads of cash, they might be able to finish the APU idea; now that OpenCL, GPGPU, ML, and CV have caught wind, it might give them a fresh market in which to thrive.

I really want AMD to succeed! Intel's lack of progress and monopolistic behavior have become embarrassing. Competition is best for the consumer. I'm seriously thinking of building an all-AMD rig once all the chips are out.

One must bear in mind that wccftech has always been one to leak stuff. Take what you might want to from that.

More importantly though, Intel definitely needs competition. This is really good news and I hope it plays out well.

It'll put price pressure on Intel, that's for sure, but I don't know if it'll make Intel work any faster on boosting performance. They're having enough trouble with process and yield as it is.

It's interesting that this time around, if these numbers are true, AMD is not only faster performing and less costly, but also lower power. In the past they've always run hot, noisy, but cheap and fast-enough.

Maybe their experience in optimizing GPU production is paying dividends here.

What about ECC memory? Intel's desktop processors fail hard on that feature.

Why do you need that in a desktop environment? What's the use case where expensive ECC RAM is beneficial over the ordinary kind?

I always thought ECC could prevent blue screens/kernel faults, but I haven't seen those in years on my laptop without ECC.

> What's the usecase when it can be beneficial to have expensive ECC RAM instead of the ordinary one?

Anyone using ZFS will (or should!) care about ECC support. [0]

Lots of people build their own NAS/SAN boxes, so ECC support on a desktop CPU at a reasonable price point would be very appreciated. Currently you need to buy specific model CPUs (Celeron or Xeon, IIRC) to get ECC support from Intel. [1]

[0] https://serverfault.com/questions/454736/non-ecc-memory-with...

[1] https://ark.intel.com/search/advanced?ECCMemory=true&MarketS...

There is nothing special about using ZFS with/without ECC.

You are getting the same benefits using any filesystem while running with ECC.

Also the "myth" that wrong checksum calculations due to a bitflip will degrade a ZFS filesystem even faster is not valid.

There are some articles/newsgroups around explaining this in much more detail.

So: Using ZFS on non ECC is not inherently unsafer than other filesystems on non ECC.

And here is Matt Ahrens, a ZFS co-founder saying the same ("there's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem"):


To be clear: ZFS performs a checksum _before_ the data is written to disk. This checksum proves to be highly useful to ZFS when checking whether the data being read back is correct or not.

However, the checksum and the data is also only as good as the memory in which it is stored during computation. Since ZFS takes care of everything else other than memory, you'd use ECC if the data being stored is important enough that you want to ensure that ZFS gets healthy checksums.
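A toy demonstration of that window (SHA-256 stands in for ZFS's block checksum here; the data and scenario are contrived):

```python
import hashlib

def checksum(data: bytes) -> str:
    # Stand-in for ZFS's per-block checksum (fletcher4 / sha256).
    return hashlib.sha256(data).hexdigest()

payload = bytearray(b"important user data")
payload[3] ^= 0x01                  # a bit flips in RAM *before* the write...
stored = checksum(bytes(payload))   # ...so the checksum blesses corrupt data

# On read-back, the corrupted block verifies as perfectly healthy:
assert checksum(bytes(payload)) == stored
print("scrub says OK, but the data was corrupt before ZFS ever saw it")
```

The checksum faithfully protects whatever was in memory when it was computed - ECC is what protects the data before that point.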

If ZFS is anything like btrfs, then I don't think ECC is super necessary. Yeah, it'll prevent problems from eventually cropping up when the RAM shits itself, but it's not like those issues compound silently in the night. When my RAM started going a bit down the drain (flipping the occasional bit - rare enough that the system still ran fine), I was hounded with checksum errors, and any attempt to work with the data was met with error messages until I got new RAM. I don't think ECC is really all that important unless you're dealing with data worth thousands of dollars or more.

IDK, I've seen some odd RAM failure modes where the base OS would run fine, but then depending on the load (e.g. games) some weird stuff would happen like a BSOD after exactly <x> minutes of gameplay.

Once ZFS writes corrupted metadata with corrupted checksums to the disk, it's very hard to recover that data. Yes, ZFS is not backup, but the implications of serving clients messed up data is also worrying in certain business cases.

Given that ECC RAM isn't (much?) more expensive, why not just use ECC? (For my microserver I remember it being the same price as non-ECC.) Well, because CPUs with ECC support are rare, etc.

Once any filesystem writes data that was corrupted in memory, the data is hard to recover. And any filesystem will serve this corrupted data if asked, since it is all that it has. This is not a ZFS specific failure mode.

Without ECC, ZFS loses one of the guarantees that it otherwise provides. But it only degrades down to how bad every filesystem is in the face of memory corruption - not worse.

The RAM itself isn't much more expensive, it's a compatible motherboard and CPU that add to the price. If you can't afford even a temporary loss of data it's well worth the cost. That is true with whatever filesystem you choose, and ZFS is always a great choice.

This is why I ultimately ended up buying an HP ML10 for my FreeNAS box: $300 for a Xeon E3-1220, board, and iLO ended up being cheaper than buying a barebones + the CPU (which is just shy of $200 retail). I would like something that supports more memory, but to get anything with more sockets or support for larger DIMMs you start looking at Broadwell-E or Xeon E5 chips, and those are considerably more expensive (my ThinkServer TD340 cost me $700 as an open box with an E5-2403v2 and 8GB of RAM installed).

If Zen client chips have full ECC support and can handle 64GB of memory, I'll be easily sold on an upgrade for my TrueNAS box. Then I'll anxiously await some lower-cost (4/8c) dual-socket server CPUs and swap out my TD340 for some Supermicro barebones build.

Do you know how much power it uses? If so, could you add some details (number of HDDs, typical load: light, medium, heavy)? I'm looking into building a home server and would like something decently beefy, not too power hungry and ideally with ECC.

I know the ML10 only has a 300W PSU, but I haven't bothered to put a Kill A Watt on it, as the ML10 itself would be outshined by the rest of my lab gear.

The E3-1220 is usually idle and frequency-scaled back unless something like a ZFS scrub is running. I've got ~45W of PCIe cards (SAS HBA, 10GbE NIC and 4x1GbE NIC), and 4x8GB sticks of DDR3 UDIMMs probably use 12W.

I'd say without drives it probably draws no more than 150W on average, unless I'm doing something CPU intensive. My drives all sit in an external SAS enclosure, since I wanted more than 4 drive bays that the LFF expansion bracket provided.

Anyway, the ML10 is pretty decent for light-medium work. It's got 4 really fast cores and 32GB of RAM is adequate for most home server use (this was why I got the TD340 though, I've got 72GB in it) - just beware that the Gen1 units don't include the drive bracket so if you want more than a single HDD you have to purchase one or get an external SAS enclosure.

EDIT: the ML10 is basically silent too, even under load - I can't hear the fans unless I try (though my SAS enclosure makes up for this by being the loudest bit of kit in my lab).

Great use-case for the server-grade Atoms. I have a freenas build with an Avoton 8-core and it's fantastic. Asrock Rack c2750D4i, check it out. Can't wait for the new Atom server chips as well, and I hope AMD launches a competitor in the low-power / storage server market.

I am looking into replacing my home server / NAS, and I considered the atom server chips, but they seem just so expensive for their computing power, and also not very power efficient. The 8 core C2750 has a passmark of 3800[0], with an abysmal single thread performance of 579, and still has a TDP of 20W, and the mobo+cpu assemblies go for ~400$.

For way less, if you're willing to give up on ECC, you can get an i5-7500T and a pretty good motherboard, bringing your TDP to 35W but with a PassMark of 7055 and, most importantly, a single thread rating of 1924.

This might not matter much if your use is exclusively NAS, but I will probably end up running some virtualized or containerized servers, or streaming video, possibly transcoding on the fly, and I'm afraid the Atom might become a bottleneck.

Is there a solution that is somewhat competitive with the i3/5/7 on price and power, and has ECC? And, ideally, that comes in a Mini-ITX form factor?

[0] Obviously the PassMark score is only a ballpark estimate to get an idea of how fast is a chip, but still.

Thanks. It looks like there are reasonably priced mini-ITX boards that support Skylake Xeons (a note to those considering this: v5 E3 (Skylake) Xeons use the same socket as Cores, but need a different chipset).

The 80W TDP worries me though: cooling, noise, and money/pollution wise (an extra 45W comes to around 400 kWh/year).
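To sanity-check that figure (back-of-the-envelope Python; it assumes the full 45W delta is sustained 24/7, which overstates things for an idle-heavy box):

```python
# Extra energy from a sustained 45 W power delta over a full year.
extra_watts = 45
hours_per_year = 24 * 365
extra_kwh = extra_watts * hours_per_year / 1000
print(f"{extra_kwh:.0f} kWh/year")  # ~394 kWh/year
```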

Considering the CPU will be idle most of the time, do you have figures about the actual idle consumption of an E3 based machine?

My home server with a Xeon E3 1240v5, 5 3.5" hard drives, one SSD, a SAS HBA and a small discrete GPU idles at 71W as reported by my UPS. I haven't made any special effort to optimize power consumption yet.

If you end up buying high TDP chip, you can simply cap the clock either in BIOS or in kernel to save power.

Aren't the Avoton chips the ones that are causing Cisco to have to replace a crapload of network gear?

Yes. The C2xxx series Atom has the errata. [0] It's not just Cisco gear affected, but Cisco are being the most proactive about replacement.

[0] https://www.theregister.co.uk/2017/02/06/cisco_intel_decline...

Some links that may give more background to the area:

Coding Horror To ECC or Not To ECC [1], What Every Programmer Should Know About Memory [2], Memory Errors in Modern Systems [3], and an analysis of memory errors in the entire fleet of servers at Facebook over the course of fourteen months [4].

[1] https://blog.codinghorror.com/to-ecc-or-not-to-ecc/ [2] https://people.freebsd.org/~lstewart/articles/cpumemory.pdf [3] https://www.cs.virginia.edu/~gurumurthi/papers/asplos15.pdf [4] https://users.ece.cmu.edu/~omutlu/pub/memory-errors-at-faceb...

It is kind of funny that we even accept non-ecc memory and say that it is perfectly OK that sometimes a 0 written to memory turns to 1. This would be more understandable if we treated the memory as unreliable but in most cases we don't.

ECC (error-correcting codes) don't completely eliminate errors — they reduce probability of them happening — so even with ECC memory you're still accepting that it's perfectly OK that 0 is sometimes 1, just a lot less likely. No absolutes here, unfortunately.

This drastically reduces the probability that a bit flip happens. And consumer PCs (and mobile devices) are practically the only devices without ECC memory; even there, RAM is the only component with no error detection and/or correction.

100%. Everywhere else, including spinning disks, SSDs, Ethernet, and TCP/IP, there is error detection and correction.

The symptoms of memory corruption are much more varied (and can be much more subtle) than BSODs. For example they can cause filesystem corruption, random app crashes (SIGSEGV...), infinite loops, etc.

That is why it is important after building/tweaking a system to do stability tests. If it runs full out without errors for 24h then you are fine.

That's.....not how non-ECC works. Even a little.

RAM bitflips randomly, period. It's just how it works. A cosmic ray can hit the memory chip just right and flip a bit, and there's no way to predict or control that no matter how "stable" your machine is. ECC RAM still flips the same way; it just stores extra check bits (an error-correcting code, not simple parity) alongside each word, so a single-bit flip can be detected and corrected.
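For anyone curious what those check bits actually do, here's a toy Hamming(7,4) sketch in Python. Real ECC DIMMs use a SECDED code over 64 data bits plus 8 check bits, not this, but the correction principle is the same: a syndrome computed from the check bits points directly at the flipped bit.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # bit positions 1..7

def hamming74_correct(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-based error position, 0 = clean
    if pos:
        c = c[:]
        c[pos - 1] ^= 1             # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                        # simulate a random single-bit flip
assert hamming74_correct(word) == data
```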

It is how faulty RAM works, and it is sensible to do that.

Not just faulty RAM. As far as I know, all modern RAM is going to have bitflips every once in a while (it even happens with ECC RAM)

I urge you to educate yourself. All RAM works this way, it's the very reason ECC RAM exists and the exact issue it solves.

I have no idea why this was downvoted. A stability test is an important step of cluster building. If the probability of something being broken in a new server is 10%, then the probability of something being broken in a new cluster of 10 servers is 1-(1-0.1)^10 = 65%.
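The same arithmetic in a couple of lines of Python, with the 10% per-server fault rate as an assumed input:

```python
p_bad = 0.10  # assumed probability that any single new server has a fault
n = 10        # servers in the cluster
p_cluster = 1 - (1 - p_bad) ** n  # P(at least one faulty server)
print(f"{p_cluster:.0%}")  # ~65%
```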

Because the objection has nothing to do with the original comment. Bitflips are not a RAM stability issue.

"Bitflips are not a RAM stability issue."

Cosmic radiation bitflips are BS. No cosmic rays reach ground level. The chance of a, say, Al-28 nucleus successfully penetrating the entire atmosphere is as close to zero as it could be possible to get. Basic physics. Something with such a high charge density won't penetrate ~100km of atmosphere and magnetic field. Even a basic muon wouldn't get through a sheet of aluminum foil and those are still capable of actually getting (barely) through the atmosphere.

The chances of cosmic radiation causing a bitflip are pretty much in the range of "Elvis coming into town on Nessie." Radiation originating from inside the system itself is much more likely a cause.

Attention downvoters: Cosmic rays don't even make it through the atmosphere. They are depleted atomic nuclei which essentially smash into another atom and that's the end of it. Not even multi TeV cosmic rays, which are rare, get through. They never "pass straight through the Earth" either, that's pretty much reserved for neutrinos only. Particles with energies of about 10^18 eV arrive at the rate of about one per square kilometer of atmosphere per century. These very high energy cosmic rays are detected at the ground by looking for the secondary photons, electrons, muons and neutrons that shower large areas of the ground after the primary particle impacts the atmosphere.

Your ground-level bit flips are most likely caused by terrestrial radiation sources, not extra-terrestrial ones. This is just basic physics.

Bitflips are not a RAM stability issue, they happen randomly due to radiation, but radiation is not random, especially in my area (I live not far from Chornobyl).

Sure, but you won't solve excessive ambient radiation by running a stress test on your RAM.

A small radioactive ("hot") dust particle won't change the average radiation level much, but it can cause problems with memory/CPU. Simple cleaning, by blowing the dust out, fixes it. I saw that a dozen times, though years ago.

Did you use a geiger counter to verify the dust was hot? Would something large enough to cause problems with the ram be large enough to detect?

er.. you should probably move. Bitflips aren't the only things radiation does. Bitflips in DNA for example.

Too late; radiation has returned to almost the natural level (about 30% above it). I have a radiation detector, so I can measure it myself. Sometimes wind brings radioactive dust from Chornobyl, e.g. after a forest fire, but it is much less dangerous.

Desktop computers are sometimes used for actual work where data integrity is important.

Only a small minority of main memory data corruptions lead to OS crashes, mostly the in-memory application or filesystem data just silently gets corrupted.

Besides many obvious reasons (preventing real bitrot), it's also a security factor acting against attacks like rowhammer.

ECC RAM isn't that much more expensive despite being less common. It's a bigger premium to go from an i7 to Xeon than to equip the Xeon with ECC RAM. If ECC were manufactured in the volumes of non-ECC it would probably be almost exactly 12.5% more expensive (i.e. just the cost of the extra chip).

> It's a bigger premium to go from an i7 to Xeon than to equip the Xeon with ECC RAM.

Not exactly. The prices of comparable i7 and Xeon parts are nearly the same.

Wow, I just priced Skylake Xeons vs. i7s and the prices are very comparable (in some cases the Xeon is even cheaper). Current Xeon vs. i7 was significantly different last year when I purchased.

I did check and it is still true that with laptops you always pay a premium for Xeon v Core on an equal performance basis (even within the same model).

That is like asking: "Why do you wear a seatbelt if you don't even drive a racecar?"

Because I care about my data. Data corruption may kill your main storage and the first backup, too.

There's no need to downvote him. ECC is extremely uncommon on desktops & laptops. Yes, it's always going to be preferable to have it, but it typically comes at a price point that excludes it from these platforms.

That's a problem, not a reason to keep living with it... And where the price differences are large, they are largely artificial. If ECC were more widely used, the price differences would shrink to reasonable levels (at the margin, ECC RAM should cost ~9/8 the price of non-ECC RAM, since it's one extra chip per eight, and supporting platforms should cost only very slightly more than non-supporting ones).

Fair enough - I'd like to see it happen, just don't think it will.

Why the downvotes? It is a legitimate question, because ECC is very uncommon in desktops and laptops.

Companies I worked for used it only on the servers where we ran critical stuff; not a single developer laptop had ECC.

I believe ECC RAM protects against RowHammer, doesn't it? That would be reason enough to get it.

It mitigates it. IIRC DDR4 also mitigates it. With both, at least, I'm pretty sure RowHammer is no longer a security problem, and maybe even the risk of crashing disappears completely (or has an extremely small probability).

This question comes up constantly. Anybody doing anything with more than 16GB, especially data science (think all the R and Python people, all scientists), anybody running a memory cached data store (all server side people on mongo, redis, etc), and anybody doing finance or engineering, wants ECC. Basically anybody doing anything where data persistence is important, and/or where even the slightest chance of silent corruption is catastrophic. And with Intel you had to spend well north of 1k to get it on an 8/16 system. Hoping Ryzen does it.

> Anybody doing anything with more than 16GB, especially data science (think all the R and Python people, all scientists)

Not sure I agree. Most science datasets (both from simulations and experiments) are sufficiently noisy that if your scientific end results and conclusions change as a result of even thousands of bitflips in your 16 GB of data, you're Doing It Wrong and your article isn't worth the paper it's printed on. (There are probably exceptions, as always, but those working in those few specific subfields should be aware of it.)

> Not sure I agree. Most science datasets (both from simulations and experiments) are sufficiently noisy that if your scientific end results and conclusions change as a result of even thousands of bitflips in your 16 GB of data, you're Doing It Wrong and your article isn't worth the paper it's printed on. (There are probably exceptions, as always, but those working in those few specific subfields should be aware of it.)

There is way too much overreach in this statement. Imagine doing Finite Element or Computational Fluid Dynamics analyses; bitflips of the floating-point values in the field solutions, which could easily make those values completely unphysical, are not the kinds of errors the solvers are written to guard against. In order to do so, you'd need to sanity-check every value, and if you had to use a "guardrail" value, it could easily take a significant number of iterations to recover to the more correct value. Solvers can be easily crashed by corruption of numbers. Sure, if you're lucky enough to have bitflips in low-order mantissa bits, no real harm done. Just don't expect the bitflips to cooperate in this way.

Maybe the "big data" and machine learning crowd don't care about some corrupt values, but most numerical/scientific computing is not so sanguine about corruption.

It is a little frightening how we are moving into significantly larger computational solutions, but are simultaneously increasing our exposure to the fragility and lack of guarantees regarding enormous quantities of perfect bits at all times.

I do CFD, so let's take that example, and say you have a bit flip in one value in one of your fields. There are four cases to consider:

a) bit flip happens so high in the mantissa it makes your code crash. detectable, you run simulation again

b) bit flip happens somewhat lower in the mantissa, enough that it shows up in your analysis. detectable as unphysical result, you run simulation again

c) bit flip happens even lower, you don't catch it as unphysical in your postprocessing and analysis. Here's where I'm saying: if a bit flip happens like this and affects your simulation in such a way that you don't see it's an error, but it still changes your end result and conclusion, and you're not running replications of your simulation to test robustness etc., you're Doing It Wrong.

d) bit flip happens even lower, same order of magnitude as numerical errors. nothing bad happens
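The four cases are easy to reproduce by flipping single bits of a double's IEEE-754 representation (illustrative Python, not from any solver; the value and bit positions are arbitrary):

```python
import struct

def flip_bit(x, bit):
    """Flip one bit of a float's IEEE-754 binary64 representation.

    Bit 0 is the lowest mantissa bit; bits 52-62 hold the exponent."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

v = 101.325  # a hypothetical field value, say a pressure in kPa

print(flip_bit(v, 0))   # low mantissa bit: error ~1e-14, lost in numerical noise (case d)
print(flip_bit(v, 50))  # high mantissa bit: off by exactly 16.0, wrong but plausible-looking (case b/c)
print(flip_bit(v, 62))  # top exponent bit: value collapses to ~1e-307, obviously unphysical (case a/b)
```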

Yes, there are some algorithms that are iterative in nature (e.g. iterate until some error threshold goes below some specified limit), that can be tolerant to random corruption. Many other algorithms aren't. And even if you use such a tolerant algorithm, you may still get a bit flip e.g. in the code, or in some data not touched by the iterative algorithm, etc. etc.

There's a reason why HPC systems universally use ECC.

> There's a reason why HPC systems universally use ECC.

I was under the impression that HPC systems universally use ECC because at that scale, the probability of a memory error in the OS is large enough to cause constant instability in some of the nodes?

As soon as you are putting a noisy dataset through multiple iterative algos, though, if a bit error hits the control flow code or data structure delimiters, you face the possibility of massive silent, propagating corruption. Imagine your hash table getting a bit error in it (all R lists and Python dicts use them), or an incorrect branch in your code. Admittedly the risk is ultra low, but you have enough problems to worry about without, in addition, a niggling sensation of RAM-risk as you work.

More generally, it's not unreasonable to say that our entire computing paradigm rests on accurate RAM. Above 16GB, the risks just become too big for anybody doing serious work, and not just messing around with prototypes.

> Admittedly the risk is ultra low

Exactly this. The size of your code is positively infinitesimal compared to your data. And unless you're writing your code and then running it exactly once, which is a) even more unlikely and b) bad practice, you'll catch any of those bit-flip errors in your code or data structures.

This has been discussed a lot in the literature, especially for GPUs where ECC carries a performance penalty both on speed and available memory, e.g. in this paper where they've tested it on a GPU cluster:


OK, but aren't the real calculations done on servers (which usually have ECC)? During my PhD days we only tested on local desktops and scheduled calculations on a remote server that had a lot of RAM and CPU power.

Even so, ECC on Ryzen would make said servers cheaper, even if only by forcing Intel pricing lower. ECC on Intel is very overpriced.

Competition please! The Intel/Nvidia monopolies have brought prices of CPUs and especially GPUs to insane levels.

Ditto. Here's hoping the Ryzen will pan out and force Intel's hand.

Ryzen leaks conspicuously don't mention gaming or the i7-7700K. AMD may win in a very small market yet lose the mainstream.

The "mainstream" market doesn't buy 7700K's. The "mainstream" market doesn't even know what hyper-threading is.

People will see two identical laptops, one with the 2-core Intel i3, and one with the 4-core Ryzen, with similar clock speeds and similar prices. The Ryzen model might even give the illusion that it comes with an AMD graphics card even though it's integrated. That's all that matters.

Especially since AMD's APUs' integrated graphics have stayed superior to Intel's.

I think I've seen news about intel negotiating with AMD to integrate their GPU design in place of intel iGPU/IRIS.

All those consoles are bound to have some benefit

>newest Intel iGPU vs two-gen old AMD iGPU, barely keeping pace

Yes, it's still true. That should've been plainly obvious.

That Broadwell CPU is an exception. It is the only desktop series where Intel produced capable integrated graphics. Btw, it is also a very strong processor.

But it was released immediately before the next CPU generation, got no real marketing, and support for it was abysmal – many games just produced a black screen with it. Based on its performance and pricing I'd have liked to recommend it, but given its socket and its support issues that was almost never a good idea.

Normally you'd compare the A10-7870K to the integrated HD graphics of a Skylake or Kaby Lake CPU; it beats those easily.

The R7 GPU architecture is not from 2015. R7 came from 2013 with the 200-series of AMD GPUs, and was the second generation of GCN.

Not to mention the few available Intel parts with those GPUs cost an arm and a leg.

If it has a K in the part number it is by definition not mainstream.

Those are made for overclockers, enthusiasts, which are a tiny segment of the market. For every K-part Intel produces they must make six E-series Xeon chips and fifty low-power notebook ones.

The i7-7700K has only 4 cores. The comparison with the 6900K is more fair.

The 6900K has 40 PCIe lanes, and that's the raison d'être of this chip. So it is not really fair to compare against the more expensive chip and then ignore exactly the thing that makes it expensive.

Many more people care about core count than PCIe lanes… Core count is probably the most important thing about a CPU.

Not if you game, develop, or do anything else where single-thread performance is more important. If core count were the most important thing about a CPU, everyone would be buying Bulldozer with its "8" cores.

Bulldozer is just old at this point.

Why is single thread more important in development? Large project builds typically spawn as many compiler processes as there are CPU cores.

If you want to keep these 8 cores busy, you need I/O to feed them.

Otherwise, your cores will be either waiting for something or just getting hot running synthetic benchmarks.

What I/O are you going to do with all these PCI-E lanes? 16 for a graphics card, 4 for an NVMe SSD if you have one (still 2x as expensive as a SATA SSD at the same capacity), a couple for a network card maybe… The only reason to have ~40 lanes is multi-GPU setups, which are absolutely not necessary to keep 8 cores busy.

Where I live, NVMe and SATA drives are at the same price and many boards come with two M.2 slots (plus of course you can put additional drive with adapter into plain old PCIE slot). Network card or two, maybe thunderbolt port or two...

All the other chips have 16 PCIe lanes, which you max out just with the graphics card.

Intel's desktop (not X99) boards have some extra lanes on the chipset, Ryzen will be a similar setup. (Also graphics cards perform fine with 8 lanes, the performance difference from 16 is tiny)

Where I live, the price of a 256GB NVMe drive == 512GB SATA drive :(

> The comparison with the 6900K is more fair.

It's also pointless if consumers will be choosing between a 7700K and the closest equivalent Ryzen part.

Gaming performance is irrelevant for top CPUs; the difference appears only at low resolutions and in old games.

The "games don't use CPU" line is only really true for multiplatform games that must also run on the low performance PS4/XBox One AMD Jaguar CPUs.

If we look at PC exclusives [1] then we can see that these are extremely CPU-hungry games. This hunger only goes up if we want to achieve a framerate higher than 60, say going for 144.

I personally have an i7 @ 3.8 GHz with GTX 1060 and none of these games can hold a stable 1080p @ 144 Hz. What's more, I've benchmarked the effect of changing GPU/CPU and increased CPU power increases FPS far more than increased GPU.

I upgraded from Radeon R270X to GTX 1060. In multiplatform games this is a huge leap. Battlefield 4 (1080p Ultra) goes from 43.1 to 94.8 [2], a whopping +120% increase. While in Dota 2 (1080p Ultra) that only netted me a +11% gain. Then when I overclocked my i7 from 2.8 GHz to 3.8 GHz I got a +23% increase.


[1] For example Dota 2, H1Z1, DayZ, Civilization 6, Guild Wars 2.

[2] http://www.anandtech.com/bench/product/1043 & http://www.anandtech.com/bench/product/1771

Just curious: why do you want to run a turn based game like civ6 at 144fps? Or for that matter anything except super reaction-dependent FPSs?

You're correct that it has a significantly bigger impact in reaction-dependent games. [1] Civilization 6 was just a good example of a CPU-heavy game, regardless of whether those frames are that useful. However there's also the matter of normalization: after using a high refresh rate monitor, even something as simple as moving the cursor in Windows feels laggy on lower refresh rate monitors.


[1] This means all reaction-dependent games; it doesn't have to be an FPS. Even fast-paced Pong qualifies.

Does it matter that every piece of scientific evidence shows that humans can't see anything at more than ~70hz?

This is false.

Well, I guess that decides it

The thing is, anyone with a 144 Hz monitor [1] knows from experience that what you're claiming here is complete bullshit. Then there's also the fact that 70 Hz causes demonstrably more nausea in VR users than 90 Hz or more.

Beyond that, if you wanna act innocent and play a citation game, I can throw you a bone. I'll give you this [2], what do you give me in return?


[1] This was already evident back in CRT days when 70 Hz was garbage and 100+ Hz was what every serious gamer was after.

[2] http://www.100fps.com/how_many_frames_can_humans_see.htm

Really? Your first citation is yourself[1] and the second one is a random website that's clearly not scientific or published.

How about the NIH? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2826883/figure/...

Gap detection thresholds for different age ranges. Notice that the average for vision is around 20ms, or 50fps. Yes, some people are lower, but the large majority of people don't see any faster than 60fps.

[1] So it's not actually a citation; it's just annoying.

This gap detection test measures pretty much the same thing as CRT flicker: at what point a light source seems continuous. That doesn't cover the full extent of human vision capabilities. What's more, if you hadn't dismissed my link as "random" and had actually read it, you would have seen that it covers this very same case and gives pretty much the same numbers. It's under Test 2: Sensitivity to darkness.

I could explain further, but I get the feeling that you've made up your mind and aren't willing to read much. If you change your mind, start with the link I gave earlier. [1]

In addition, this thing is pretty easily testable with home equipment. Get yourself a 144 Hz or faster monitor and construct the following program with OpenGL: two identical boxes moving side-by-side, one updating its position at the full 144 Hz and another at 60 Hz. [2] You'll see the difference yourself.


[1] http://www.100fps.com/how_many_frames_can_humans_see.htm

[2] There's a web app as well, but unfortunately browsers don't support frame rates higher than 60 Hz that well. My chrome is limited to 60 Hz even on 144 Hz for example. https://www.testufo.com/#test=framerates

lol? Your CPU can absolutely bottleneck your GPU. And new games are starting to use hyper-threading more and more, see Battlefield 1 for example. There is a huge difference between i3/i5/i7.


I can agree with this. I went from a Pentium G3258 clocked at 4.8GHz to a box-clocked i7-4790S. Battlefield 1 went from unplayable to dead smooth with a GTX750Ti.

Unless you have a 144p display then all of the high-end configs there will appear identical.

Do you mean 1440p or 144Hz? I'll assume 144Hz. First, that would be enough – gamers are flocking to those like crazy. Second, you have to look at multiplayer benchmarks here, not single player; in those the hyper-threading of the i7-7700K helps a lot compared to the i5-7600K. If Ryzen can place a better model in between those two processors, that would be very nice.

144p (144 fields, progressive scan) as in 144Hz, in contrast to things like 60i (interlaced).

I know people love their 144Hz screens, but if you're generating 170FPS then the remainder is utterly wasted. You're "bottlenecking" the monitor.

So at some point you're spending money on stuff you don't need, the extra frames are simply thrown away.

You'd think, but there's an appreciable difference between i3/5/7 at the same level, even with discrete graphics.


I've been looking into a new CPU to play Overwatch because my i5-4(670?)K is struggling with even high settings. It bottlenecks the GPU, pegging all 4 OC'd cores at 100% constantly without even hitting 144fps.

It's not just old games that need good cpus these days, though I will admit I'm probably more than just the enthusiast market.

Something is wrong with your pc. I'm running ultra gpu settings constant 70fps at 2560x1440. I've got the same cpu, not overclocked at all, and a 980GTX. I'm also running 4 of those monitors off of the gpu, one with the game, one usually with netflix at the same time and I sit at 20% cpu the whole time.

Yeah, I'm running 1080@120hz on a GTX1080 with a 2500k@4.5ghz on all cores and the GPU maxes out before the CPU. Google around a bit for OW competitive settings and you should be able to achieve something similar :)

The guy you're replying to looks like he wants 144 fps on 4K. Which is prolly not doable with current hardware.

They didn't say 4k resolution they said 4(xxx)K model CPU which is a very standard enthusiast Intel CPU.

You are correct if they're trying to do 4k 144hz they're not going to have any luck. Stick with 1080 or 1440 max resolution 144 hz gaming.

Also personally I think 144hz is a waste. I work for Twitch so I have access to and play on 144hz hardware a ton and I could easily afford it. I instead go for max screen real-estate and high end color reproduction. I'm totally fine with my 60hz gaming on my dell u2714 at home and at work where I can walk 10 feet and use nice 144hz gaming rigs I generally just play on my 30" dell instead.

Totally anecdotal but my coworker was all about the big glorious 1440 display, bought a raptor or whatever and a 1080 gpu. Then because the raptor had an issue he bought a 48" 4k tv instead. The raptor has been replaced but sits idle because having equivalent 4 screens with no border on the tv is much more useful to him.

144hz is like 3d in movies, if it doesn't distract you (glasses for movies, smoothness for games) then it's immersive. And that's the problem. Immersive things you forget about, they bleed away. Gaming is already very immersive so you're just forgeting about more things.

Go with more screen real estate if your desk will fit it. I've convinced 30+ friends to go with more monitors over better single monitors, friends who thought that 2 monitors was ridiculous, friends who thought that 3 was ridiculous. No one has ever come back and said I was wrong.

Final point: Overwatch, I believe, is capped in the engine at 60fps. This means that if you run over 60fps you're not getting additional information. I don't think they even do sub-frame interpolation, so you're likely just seeing the same frame multiple times. But best case even sub-frame interpolation is just showing you predictions for ~6.94 milliseconds at a time (at 144Hz), and these are not actually accurate, they're interpolations.

Yeah I guess I'm just trying to run it on high which is a bad move. If I kill everything then I'll get better frames I'm sure. I'll try it out. It would be nice to actually have the ability to leave chrome open while playing overwatch without killing my framerate from the CPU overwork.

Nope. Just tried and I'm still capping at around 100 fps. I don't know what's going on. It'll fluctuate wildly and still uses 100% cpu.

Sorry to say this but "Reinstall windows" is a good place to start.

How much ram do you have?

16gb ram.

It's DDR3 1600.

Reinstalling windows is not really something I want to do.

Yeah I hear you. I just got used to doing it years ago. It helps if you keep your docs on a separate drive so you only have to reinstall the OS.

I've mostly migrated things since I have a 120gb ssd that I've installed the OS on... I just don't want to haha

Yeah there's something wrong then. Because I'm just trying for 1080 144hz... I shouldn't have this issue but it KILLS my cpu. Just runs away with it.

You're running at close to 144 FPS - what FPS are you hoping to hit???

Probably going for the 144 FPS worst case (meaning frame times below 6.944ms at all times).

I'm trying to get a rock solid 144 FPS because 144hz monitor.

Many good monitors have a 144 Hz refresh rate these days.

If I were Apple, I'd jump on this. Maybe then battery life will be better. And hopefully that famed form fitted battery comes out of the woods.

Would have been an interesting validation to put the i7-2700K in there. It was nearer the 8350 and would have provided a good baseline to see that the benchmark is actually useful. As it stands, I'm a little sceptical (but also very hopeful, since I'm buying desktop hardware soon).

That does not really make sense. Why would physics simulation and prime calculation be so much slower? I would not trust those benchmarks.

If those benchmarks make use of AVX, that could be another explanation. Even without those instructions, Zen has less powerful FP units than Skylake. One advantage Zen might have is in mixed code, since it is 10-issue, segmented into 6 int and 4 float, while Skylake is 8-issue shared between int and float.
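A deliberately crude way to see why the split 6+4 design could win on mixed code and lose on pure FP code. This is a toy throughput model that ignores dependencies, ports, and scheduling entirely; the widths are just the ones quoted above and the function names are mine:

```python
import math

def cycles_split(n_int: int, n_fp: int) -> int:
    """Toy model of separate schedulers (Zen-like): up to 6 int uops
    and 4 FP uops per cycle, so the busier side sets the cycle count."""
    return max(math.ceil(n_int / 6), math.ceil(n_fp / 4))

def cycles_unified(n_int: int, n_fp: int, width: int = 8) -> int:
    """Toy model of a unified scheduler (Skylake-like): any mix of
    uops, up to `width` per cycle."""
    return math.ceil((n_int + n_fp) / width)

# Mixed workload: 600 int + 400 FP uops -- the split design keeps
# both sides busy and finishes sooner.
print(cycles_split(600, 400), cycles_unified(600, 400))
# Pure FP workload: 1000 FP uops -- the split design can only issue
# 4 per cycle and falls behind.
print(cycles_split(0, 1000), cycles_unified(0, 1000))
```

In this cartoon model the split design wins the mixed case (100 vs 125 cycles) and loses the pure-FP case (250 vs 125), which matches the intuition in the comment above.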

This has been discussed extensively. It's very likely due to memory.

The only specs we've seen released for any of the benchmarks have shown very slow memory with high latency being used. When someone went about replicating the same timings on his Intel CPU, he saw a significant drop in those same scores. The post was on the TomsHardware forums when the first leak slipped out.

My guess would be branching behavior. The Intel branch predictor could be better for that sort of stuff.

I was weirded out by some of those figures too. I thought Ryzen's floating point units were smaller than Intel's, so how come it's so much better in the floating point benchmark? The SSE performance being so far out there compared to everyone else also looks very interesting.

It could be that the benchmarks are very focused on some particular kind of operation that Ryzen is unusually well or poorly suited for. We definitely need real apps and more sophisticated benchmarks. It looks to be a good processor, probably miles ahead of Bulldozer in many departments, but how it compares to current-gen Intel in popular workloads will have to wait for March 2nd.

That might be true of the physics simulation but I think that's more likely Intel's better vector processing resources. With a prime search the branches involved ought to be a combination of the trivially easy ones that any predictor should be able to handle and the super high entropy ones that no predictor can guess. So if anything doing badly on the prime test is an indication that it was AMD's good branch predictor that was bumping up its score on the other tests.
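To illustrate the point about prime searches: in naive trial division, the inner divisibility branch is almost always not-taken (trivially easy for any predictor), while the prime/composite outcome across candidates looks irregular and is the hard part. A small illustrative sketch:

```python
def is_prime(n: int) -> bool:
    """Naive trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:  # within one test: almost always False -> easy to predict
            return False
        d += 1
    return True

# Across consecutive candidates, the prime/composite outcome pattern
# is effectively irregular -- the branch no predictor can guess.
outcomes = [is_prime(n) for n in range(1000, 1100)]
print("".join("P" if p else "." for p in outcomes))
```

The printed pattern has no short repeating structure, which is roughly the "super high entropy" case the parent comment describes.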

The test also said that AMD beat out intel on SSE stuff.

Execs at Intel ain't gonna be too worried; they probably have signed contracts that lock partners in for many years to come.

In a world where progress has slowed down, being two to three years behind Intel means you're practically interchangeable. This chip isn't an abnormality, it's a sign of things to come, and there's no way it isn't keeping Intel's execs up at night.

AMD is prepping a 32-core server beast. Considering Intel just launched a 24-core chip costing $9,000, and considering AMD's IPC is looking good, Intel execs must be gnawing down to their elbows at the prospect of AMD walking into that market at 50% pricing deltas. Over to GloFo on process, though. That appears to be a possible Achilles heel for AMD.

> Intel execs must be gnawing down to their elbows at the prospect of AMD walking in to that market at 50% pricing deltas.

That's not how it works. If they think AMD can make a dent, they'll mark their chips down to be competitive (for whatever definition of competitive they use, which is probably supported by data, though it might not be YOUR definition). Poor Intel will have to deal with only 100% profit instead of the 300% or so they've become accustomed to.

Intel got a knockout blow from AMD some 15 years ago. Enough people working at Intel still remember it well enough. This aspect of history is very unlikely to repeat, or even to rhyme.

32 core beast makes me think of a chip the size of a drink mat. Presumably it does fit in the AM4 socket though.

AMD isn't limited to GloFo. There are rumors they're utilizing Samsung, as well, since it's also 14nm. And they could pivot to TSMC for 10nm, once Fab 15b is operational.

GloFo licenses 14nm from Samsung and there's rumors amd might be prepping to use tsmc.

Intel's ultra-top-end chips are priced to the moon and then some. If AMD can nibble away at that market they're going to do quite well.

The trouble is it looks like they nearly folded out of the market completely, so they've got a lot of trust to rebuild.

Not wasting a lot of silicon on an integrated graphics solution seems to be a big win: it increases the effective power of the CPU without increasing the price as much, since in the end the die will likely be smaller.

I am surprised Intel has never come up with more gaming-oriented i7s with more cores and no integrated graphics, as pretty much no gamer would run without a discrete video card anyway. Then again, without any competition from AMD it probably wasn't worth the engineering effort or the risk of cannibalizing their Xeon offerings.

Not a CPU expert here, but isn't this a comparison of AMD next gen to Intel current gen? Won't Intel have a new CPU out, too, when the Ryzen is actually released?

These release in less than two weeks. Intel's "current" gen released less than a month ago.

Seems pretty fair to me.

Ryzen becomes current gen in 11 days (March 2nd release.)

Intel would need to update their "current gen" (Skylake, Kaby Lake) CPUs with 8-core models to be competitive but for some reason Skylake and Kaby Lake have been stuck with a max of 4 cores... which is why these benchmarks are compared to an 8 core Broadwell.

The current roadmap suggests Intel's desktop line will get yet another 14nm tweak called Coffee Lake, which is basically the same arch as Skylake. And next year it will be Cannon Lake, a 10nm shrink of Skylake.

Intel has been pretty much milking the Desktop Market for as long as they could.

Ryzen is not "next gen" anymore. It will be released in less than two weeks. Not enough time for Intel to come up with anything groundbreaking.

Not really. All Intel has are Ticks. Incremental upgrades for the next few years at least.

Intel's new CPUs tend to be 5% faster at the same price so Ryzen should be competitive with Skylake-X.

One of the best things about Zen is that it "should" substantially change the dynamics of cloud/VPS hosting.

High-memory instances could be cheaper due to support for 8 memory channels. Lower-end instances could be cheaper due to lower cost per core.

Any thoughts on the huge discrepancy in SSE performance? Something seems wrong there.

As mentioned elsewhere, these chips only have a 128-bit SIMD unit (either for power or for space savings?) and so 256-bit SIMD executes half as fast as it would on Intel.
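A back-of-the-envelope way to see the halving. This is a throughput-only toy model (the function name is mine; real cores also differ in issue ports, latency, and clock behavior):

```python
import math

def vector_uops(op_width_bits: int, datapath_bits: int) -> int:
    """How many micro-ops one vector instruction gets cracked into
    when the execution datapath is narrower than the vector width."""
    return math.ceil(op_width_bits / datapath_bits)

# Zen-style 128-bit SIMD unit: each 256-bit AVX op becomes 2 uops,
# so AVX throughput is halved relative to a full-width unit.
print(vector_uops(256, 128))
# Intel HEDT-style 256-bit unit: 1 uop per 256-bit op.
print(vector_uops(256, 256))
# 128-bit SSE ops run at full rate on both designs.
print(vector_uops(128, 128))
```

This is why the gap should show up in AVX-heavy benchmarks but not in SSE or scalar ones.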

But if I read the picture properly, AMD appears to be 8 times(!) faster than anything else:


That seems atypically good in a market where new generations have typically brought 10-20% improvements.

I still wouldn't say it's impossible: for example, Intel had (apparently until Sandy Bridge) much slower NaN handling than AMD, at least without SSE2. So maybe AMD discovered some weak point like that. But until I read an explanation, it does seem too good to be true...

Ah. Yeah, that looks impossible.

Looking forward to the day when reviews include HPCG (the replacement for LINPACK) as a benchmark, with a little animation showing how hot the different bits of the core run.

Should be soon.

HPCG is a memory-bound application, so I doubt it runs the system as hot as HPL (High Performance Linpack).
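Roughly speaking, that's because HPL's dense matrix multiply does far more flops per byte moved than HPCG's sparse matrix-vector products. A rough arithmetic-intensity sketch (idealized counts, assuming each dense matrix is touched once from memory and a CSR sparse format with 8-byte values and 4-byte column indices; the function names are mine):

```python
def dgemm_intensity(n: int) -> float:
    """Approximate FLOPs per byte for an n x n double-precision matrix
    multiply, assuming A, B, C each move through memory once."""
    flops = 2 * n**3
    bytes_moved = 3 * n * n * 8  # three n x n matrices of 8-byte doubles
    return flops / bytes_moved

def spmv_intensity() -> float:
    """Approximate FLOPs per byte for CSR sparse matrix-vector multiply:
    2 flops per nonzero vs ~12 bytes (8B value + 4B column index)."""
    return 2 / 12

print(f"DGEMM (n=4096): {dgemm_intensity(4096):.1f} flop/byte")
print(f"SpMV (HPCG-like): {spmv_intensity():.2f} flop/byte")
```

Hundreds of flops per byte keeps the FP units (and AVX) saturated and hot; a fraction of a flop per byte means the cores mostly wait on memory.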

Looks like my next desktop build can be all AMD. With radeonsi/radv catching up and upcoming Vega and Ryzen, it looks quite attractive.

I'd really like to see what AMD can do in the ultrabook and notebook segment. Good mobile CPUs with a good integrated GPU could really be a killer feature. I don't know whether the desktop segment is enough for AMD to reboot its cpu brand (although the server market is a more interesting beast).

I can't help but feel AMD is shooting the puck where the goal was, but won't be in the next few years. We desperately need GPU-like devices for advanced ML/AI. Even Intel knows this and is investing hugely in that area to be competitive against NVIDIA[1].

I predict that in the future AMD will come to dominate the home/enthusiast CPU market, and Intel's low-power CPUs will dominate enterprise along with whoever comes out with a CUDA competitor.

1. https://www.google.com/amp/s/www.fool.com/amp/investing/2017...

AMD already do that, but no it's not like ML/AI is the main market right now. Nowhere near.

I don't know why I get the feeling that most of this advancement is the result of Jim Keller's contribution. Surely, AMD must have other engineers capable of similar achievements, right?

Fingers crossed that this is not a paper release. If AMD can deliver and I can actually buy, I will, to keep the x86 market competitive.

Sadly for those of us involved with floating point stuff, Intel is still king (and Nvidia is still the Queen).

AMD has been really smart. FreeSync is a huge success. AMD is outperforming Nvidia by about 20% on Vulkan and DX12 on similar hardware because game makers can reuse their optimized code from the AMD-equipped consoles. On Ryzen, AMD appears to have made a strategic choice to reduce the silicon spent on AVX in order to improve mainstream performance and increase cache, while maintaining a smaller (and therefore cheaper) die than Intel's increasingly bloated silicon. It all points to a company, back to the wall, really fighting its corner and optimizing its much smaller resources. If Ryzen and Vega work out as expected, I hope the people responsible at AMD are in for big rewards. BTW, the stock has ramped 200% in the past year.

To get an idea of how smart AMD's people are, this video is instructive:


AMD last decade has been so stressful even from the public POV, I'm surprised they managed to survive up to Zen.

If ryzen is enough of a success this will be heaven for the company.

I wish them all the best.

What about Win7 drivers? Even if the Redmond-based company doesn't like marketing about it (an understatement), please support it. (Recently AMD announced Win7 support, only to remove the announcement after a few days.)

Even Intel supports Win7 with their most recent CPU as well, even if it's not officially marketed, and one has to search around for the driver.

Call me selfish but I think it is more important for them to contribute to GNU/Linux and *bsd projects.

I know Microsoft has a contractual obligation to support 7 for a while but I think there's next to zero chance they will reset Windows 10 and go back to Windows 7. You will need to get away from Windows 7 at some point. Better start making preparations now.

I'm not sure how much you are following, but AMD is doing quite well on Linux/*BSDs (at least regarding graphics, but they haven't had any interesting CPUs for a while). Mesa 17 has just hit OpenGL 4.5 compatibility on last 4 generations of Radeons [1].

Radv, community-developed Vulkan driver supporting last 3 generations of Radeons, supports vkQuake, Dota 2, and The Talos Principle, which is pretty much everything there is on Linux [2]. Playing Doom 2016 over Wine using Vulkan is also working if one uses airlied's branch [3]. Performance is not on par with Windows right now, but I expect it to improve over time.

[1] https://www.mesa3d.org/relnotes/17.0.0.html [2] https://en.wikipedia.org/wiki/List_of_games_with_Vulkan_supp... [3] https://github.com/airlied/mesa/tree/radv-wip-doom-wine

At the moment Windows 7 is the number one desktop/notebook OS; it dwarfs all other operating systems (all other Windows versions incl. Win8 & Win10, macOS, and Linux combined)!

Win7 is superb: it looks nice, works rock solid, and is supported until 2020 or beyond (see XP). And then in the 2020s it's Fuchsia/Android, or whatever OS is available and makes sense for end-consumer needs. (Of course *nix driver support is important too; AMD knows that.)

It seems like your data is outdated. According to StatCounter, Win 7 has 47.5% of the total Windows market; you have to go back almost a year for a point when 7 had more than 50%. NetMarketShare says Win 7 has 47.2% of the total market.

What is Windows 7 share on new devices? The total installed base numbers are meaningless here.

Windows 7 has less than 3 years of life left before Microsoft stops shipping security patches. At that point the OS's vulnerabilities will never be patched and no one will be able to maintain it, akin to XP; in April we'll see Vista join XP in being unsupported.

If you want a platform you can ride ad infinitum, Microsoft is the wrong boat to be in. We're using OrangePi boards for everything, since you can build Debian for them with a fully libre stack and no closed firmware.

Do you mind explaining why you prefer Windows 7 over 10?

I'm not OP but I'm using Windows 7 for playing Steam games which do not work on Linux natively or using Wine. My reasons for avoiding Windows 10, aside from having everything I care about on Windows 7 already, are described well by EFF [1].

None of the games I care to play are DX12 and Windows Store exclusive, so using Windows 7 has worked well so far.

[1] https://www.eff.org/deeplinks/2016/08/windows-10-microsoft-b...

Grab a copy of Win10 LTSB-N and get rid of Win7. It has all that annoying crap stripped out.

You should be able to find a copy fairly easily floating around here and there.

Thanks for mentioning this, I didn't know the Enterprise version had all that removed. Do you know if it still sends data to Microsoft or not?

You can always 'unfuck' a standard edition: https://github.com/dfkt/win10-unfuck

It lists two other repos at the bottom for privacy and de-bloat. I version froze my only Win10 VMs, so not sure if the projects are current for post-Anniversary Update.

What is this magic you speak of? Know of anywhere to find out more? I'd kill for a suitable equivalency to win7

To quote Wikipedia (https://en.wikipedia.org/wiki/Windows_10_editions):

> Windows 10 Enterprise Long Term Servicing Branch (LTSB) is similar to Windows 10 Enterprise but does not include Cortana, Windows Store, the Edge browser, Photo Viewer and the UWP version of Calculator (replaced by classic version), and will not receive any feature updates, gives companies more control over the update process. Windows 10 Enterprise N LTSB also lacks the same components absent in other N variants (see below), and it is the most stripped down edition of Windows 10 available.

Given that this is Enterprise stuff, you won't get it preinstalled on retail laptops/PCs, and buying it separately is impossible - you only get it with a Volume License contract or an MSDN subscription (IIRC $1.1k per year).

The "-N" suffix has been around since (IIRC) Win XP; it denotes a version of Windows without the bundled Media Player and related components.

Edit: According to https://www.howtogeek.com/273824/windows-10-without-the-cruf... one could get Win 10 Enterprise as a $7/month subscription and via this also Win 10 Enterprise LTSB? Does anyone have more details on how and if this works? I'd GLADLY pay $7/month for this, even more.

I think the $7/month option is only for managed service providers to resell to their customers; you can't get a single instance of it directly from MS.

Whether they are allowed to offer it to private customers, and whether any do, I have no idea.

Ahh that's a bummer.

doesn't that require an enterprise license?

I think they were implying piracy or something
