Hacker News
AMD Ryzen 3000 announced (anandtech.com)
657 points by DuskStar 24 days ago | 339 comments

No mention of it, but the pressure should still be on AMD to open source or allow firmware disable of their Platform "Security" Processor.

I would really like to be more enthusiastic for my next build to use something like this, but all my computers are presently trustable in a way new platforms with proprietary coprocessors that haven't seen me_cleaner support cannot achieve.

It really sucks to give Intel money - it's not like they support the me_cleaner project, and they are actively antagonistic to third parties disabling their backdoors - but at some point it stops being a matter of principle and becomes one of practicality. I can disable the unwanted parts of the hardware on one platform and not on the other.

AMD's PSP is ARM TrustZone, there is no way AMD could open it in their current chips, they don't own the IP and ARM is vehemently opposed. Due to the outcry, they are more likely to build their own secure enclave/supervisor processor in the next major rework of Zen, which they would own the IP of.

There are open-source TrustZone implementations (OP-TEE).

AMD could drop Arm and move to a RISC-V based secure enclave. Google is developing OpenTitan as open hardware based on RISC-V.

That would be good. If ARM is opposed, ARM should get lost.

Have you considered using the POWER9-based Raptor Blackbird uATX board for your next build? The firmware is open, and they advertise it as a feature:


The Raptor stuff is pretty cool, but the cost puts it well out of practical reach for most of the people calling for opening the PSP. The cheapest Blackbird+CPU combo is $1279.

Does Intel let you disable the ME (their PSP equivalent)?

Meanwhile, AMD seemed to in the past[0].

0: https://www.phoronix.com/scan.php?page=news_item&px=AMD-PSP-...

They don't have a choice on older chipsets that me_cleaner has been reverse engineered to work with.

Hence why it's a matter of practicality. Neither respects user freedom, but the community has reverse engineered the ability to disable one company's backdoor and not the other's.

Is there a Xeon E3 mATX motherboard (C236 or C224 chipset) which supports me_cleaner and has 8 SAS ports? Preferably without IPMI.

I'm really looking forward to this. But one issue with current AMD CPUs is that you have to buy a desktop GPU even when you are not doing any kind of gaming and such. I know Intel iGPUs haven't been terribly good, but for work they are good enough, and it's one less part and cheaper to boot. For the same performance a Ryzen 3rd gen + GPU might still be cheaper, but the price advantage gets reduced.

I haven't really seen that mentioned much, and I wonder why that is. I do love the potential of Zen 2 + 7nm. The 65W of the 3700X and the high frequency of the 3900X both suggest interesting potential for the future. The six-core Ryzen 5 might end up having higher overclocking headroom.

Then there's of course Navi, the first new GPU core in a long long time.

>But one issue with current AMD cpu is that you have to buy a desktop GPU even when you are not doing any kind of gaming and such.

AMD just aren't selling many CPUs to business desktop system integrators, partly due to dubious tactics by Intel to keep them out of the market. If you sell most of your CPUs to enthusiasts, it just doesn't make sense to squander die area on a crappy iGPU. Gamers obviously want a fast GPU, but so do most creative professionals - Photoshop is heavily GPU accelerated, as is Premiere and Resolve, not to mention essentially all 3D modelling and CAD packages. Scientific computing is also rapidly moving towards the GPU. GPU performance has a surprisingly large impact on day-to-day responsiveness, because all the major browsers use GPU compositing.

The market for fast chips with crappy iGPUs just isn't as big as it used to be, nor is it particularly accessible to AMD. The Athlon and Ryzen APUs make a great deal of sense for the current market, offering a good balance of performance between CPU and GPU. I expect to see 6 and 8 core Ryzen chips with Vega GPU cores as part of the Ryzen 3000 generation, which will further close the gap.

> AMD just aren't selling many CPUs to business desktop system integrators, partly due to dubious tactics by Intel to keep them out of the market. If you sell most of your CPUs to enthusiasts, it just doesn't make sense to squander die area on a crappy iGPU.

I'm a developer, and I want fast build times. I don't need a dedicated GPU for that.

Right now I'm squandering money and power on a dedicated GPU which is probably idling at 0.0000001% rendering a composited 2D desktop in its sleep.

If you really need a mountain of CPU power, then your stance toward the GPU should be that you're glad the several Watts it requires aren't being emitted under the same heatsink as the CPU you're relying on, and that it isn't wasting memory bandwidth you could put to better use.

You can buy a used GPU for well under $100. Now you can drive multiple 4k screens for that HiDPI terminal goodness and stop choking your CPU's memory bandwidth :)

Still a better deal than Intel and a superior desktop experience.

My terminal window has a background blur effect and I like my desktop switching animations to maintain 144fps.

Implying devs don't use dedicated graphics cards...

What desktop environment do you use that has buttery smooth 144fps animations? Gnome certainly isn't smooth.

And who can read a terminal at 144fps anyway? I seriously couldn't care less about eye-candy, animations and fps.

Then again, I'm an i3/sway-user, so I guess I don't exactly represent the average (Linux) user.

No need for animations or anything like that, 144hz also means less input lag and tearing.

I feel the same way and wish all CPUs had at least some rudimentary video output capability. I've always appreciated Intel for that. They've taken a mobile-first / mobile->desktop strategy, and AMD adopted a server-first / server->desktop strategy. Intel's desktop CPUs are a bit of an afterthought, and AMD's mobile (APU) is a bit of an afterthought.

For AMD, what they've done makes the most sense, as fighting Intel in the mobile space is the toughest market to break into. You can get a cheap GeForce GTX 1050 for $130 or so, with the perk of having great OpenGL/DX drivers keeping anything you do end up using it for nice and snappy. I'm in your same boat and use a 2700X and a 1060.

AMD should really have their motherboard vendors add some sort of basic functionality, like the old IGPs.

You could always get a refurb $15 Radeon from a few years ago if you really don't care about 3D performance.

I also considered going with a USB-DisplayPort adapter, but it was more expensive and I wasn't sure how well it would work.

As far as I'm aware, a USB-DisplayPort adapter will make use of the system's CPU more than a cheap discrete add-in card would. Factor in the custom drivers that one would have to install (both on Linux and OS X), and for me that would be a non-starter as my sole video output.

Same here. I recently did a Ryzen 7 build with a Thin-ITX board.


I have a separate gaming machine, and would have rather just used the integrated for my Linux machine. It doesn't look like AMD is refreshing their APU lineup at all in this release. Did I miss it, or are there no APUs in the list?

The APU/mobile chips typically come later. Raven Ridge chips were released after their desktop counterparts, and the Zen+ (12nm) 3000U series mobile chips were announced only a few months ago.

AMD also wants to get more into servers, supercomputing, AI, and cloud. That's mostly Epyc, but Ryzen is fast enough to play there, and integrated GPUs don't matter there either.

That is probably true, but if I am buying a GPU for hundreds of dollars I don't care as much about the cost of the CPU. I think AMD needs to do something if they want to fulfill the potential of Ryzen. (Removing or minimizing the role of the chipset might be interesting for example).

Not necessarily. For instance, if your target is 1080p gaming you can do that on a budget with an RX 570 or RX 580 + a reasonably priced Ryzen 7 chip. This market isn't at all competitive for Intel because it's price conscious and iGPUs aren't anywhere near up to the task.

The only market that doesn't care about CPU price is the market that doesn't care about price at all, i.e. people building systems with an i9-9900K or X299 platform + RTX 2080 Tis.

I am just not sure it is a super strong market. As far as I can tell those cards have been selling for roughly the same price or more as when they were introduced 2 years ago. A lot of people would just stick with what they got, or get a laptop instead. If the price of GPUs and memory was half of what it is it would make a lot more sense.

Maybe offer a very cheap add-on GPU for office use. Something in the iGPU range that would sell for, say, $50. It could be based on one of AMD's older GPUs but with support for new display types and ports.

Right now, the cheapest "new" video card sold directly by Newegg is a $35 NVidia 210 which was new in 2010. Interestingly, reviews from that time period suggest that it sold for $30 after rebate. It has HDMI, DVI and VGA ports.

In the $50 range you can get an R7 350, which promises 4K support (although I suspect that will be at 30Hz) and would certainly be enough to light a monitor with enough oomph to put shadows and transparency effects on your windows.

> In the $50 range you can get an R7 350, which promises 4K support (although I suspect that will be at 30Hz)

It supports 4k@60. It looks like everything with a GCN core does, so most 7xxx, most 2xx, and all 3xx cards.

I think the older Radeons only support 4k@60 over DisplayPort, though; HDMI support for it (HDMI 2.0) is relatively new.
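The raw numbers bear this out. A rough back-of-envelope (the effective link rates below are assumptions after 8b/10b line coding, and blanking intervals are ignored, so real requirements are slightly higher):

```python
# Uncompressed data rate for 4K @ 60 Hz with 8-bit RGB (24 bpp),
# compared against approximate effective video bandwidths per link.
width, height, fps, bpp = 3840, 2160, 60, 24
gbit_needed = width * height * fps * bpp / 1e9  # ~11.9 Gbit/s

links = {                 # effective Gbit/s after line coding (approx.)
    "HDMI 1.4": 8.16,
    "DisplayPort 1.2": 17.28,
    "HDMI 2.0": 14.4,
}
for name, capacity in links.items():
    verdict = "ok" if capacity >= gbit_needed else "too slow"
    print(f"{name}: {verdict} ({gbit_needed:.1f} of {capacity} Gbit/s)")
```

So 4k@60 squeezes into DP 1.2 and HDMI 2.0 but not HDMI 1.4, which is why older Radeons could only drive it over DisplayPort.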

Fine Wine :)

If you want a discrete GPU with the same featureset as a modern integrated Intel GPU - with basic desktop features such as 1) being able to decode modern video codecs like HEVC 2) prime Windows and (especially) Linux support or 3) modern video outputs such as HDMI 2.0 - you need to buy a Radeon RX550. This will still set you back more than $80 and add another fan to your build.

A Radeon R7 240 from 6 years ago will still set you back $50, but will not give you modern video outputs or video codecs (although it at least still has relatively prime driver support). It's probably even slower than Intel's current integrated graphics too. Might as well go for the upsell then.

The prevalence of internal GPUs unfortunately seems to have killed the market for up-to-date very low end discrete GPUs. The "budget stuff" starts at $80, which is quite steep for something that barely has added value over an integrated GPU.

Even the newest AMD GPUs don't have full VP9 (YouTube) acceleration.

Technically AMD's APUs (the Ryzen 2400G and other CPUs with integrated GPU of the Vega generation) do support it, but for some reason AMD has never enabled this in their discrete cards. Neither have they released a low end Vega discrete card either. They have the tech, but don't seem to be interested much in the low end.

> But one issue with current AMD cpu is that you have to buy a desktop GPU even when you are not doing any kind of gaming and such.

What? No you don't. They literally have an entire line dedicated to the exact use case you mentioned (CPU w/ iGPU for business use):


You could pick a Ryzen with "G", like the Ryzen 5 2400G, though... They have an integrated GPU.

Unfortunately there were no desktop G parts (APUs) on the Zen+ architecture (as far as I know.) And they're not in the first round of Zen 2 either.

The 2400G, which I think is currently the top desktop APU, launched in Feb. 2018 and has four cores.

The Zen+ APUs should be out soonish. Can find plenty of information about the Ryzen 3 3200G and 3400G, but no release date :P

Nothing stops integrators from using the 3xxxU parts and the old 2xxxG parts.

Regardless, I do expect G parts will be announced soon, too. 3 CPUs definitely isn't a full lineup.

The 2xxx series is Zen+; also, the OP's complaint was about current-gen Ryzen, which the 2200G and 2400G fall into.

Those APUs are a weird mix between Zen and Zen+, they got improvements over Zen but they are not fully Zen+, and they even have some unique drawbacks (the TIM, they are not soldered).

Complaining about TIM on an entry-level processor seems... questionable.

I was just highlighting that there are differences.

Though the temps being higher because of that, resulting in more work for the cooler, is not ideal for a CPU that's otherwise great for an HTPC.

2000 series is not zen+ when it comes to APUs. 3000 series is zen+ in APUs.

Ah, interesting. Thanks for the clarification.

four cores eight threads

They are limited to 4 cores. In the new lineup too; when the G series comes out later, they'll be limited to 8 cores (one chiplet for CPU and one for GPU).

With this CPU chiplet design they couldn't have two 8-core chiplets and a GPU in a single package.

For the vast majority of professional use cases, 8 cores / 16 threads running at 4.5+ GHz boost is going to be more than enough for a while. AMD sure has spoiled the market in just two years by commoditizing 8-core desktop chips; in 2016 the conversation would have been about which flavor of $400+ quad-core or $800+ hexa-core from Intel you were going to get.

Not true. 3000 series APUs are based on zen+.

Read 'later' as 'presumably labeled with a misleading 4xxx'.

I know of the G series, and I know why they are usually a gen behind, especially given the quality of discrete graphics AMD has and needs to sell. But for work, for just productivity, a lot of the savings wash out with the GPU. It used to be that you could get a super low-end graphics card for those workloads, but these days even the lowest-priced GPU is ~$90.

Just as an FYI for anyone looking for cheap basic display adapters, you can basically get an infinite supply of refurbed/pulled Quadro NVS 295 cards on eBay for $5-10.



Hardly the latest technology but you can plug a monitor into it.

No DisplayPort and it almost certainly doesn’t support 4k at 60Hz, both of which the Intel chipset supports and both of which I’d consider essential for high-end productivity use these days.

I still don't get how people are making use of 4k outside MacOS. Windows and Linux Desktop environments require tons of massaging to get scaling right, even with flagship software (some Adobe programs don't scale properly).

Here we are, 8 years after Apple started shipping "Retina" displays, and PC software hasn't caught up. It's embarrassing.

I use Ubuntu on a 13" 4k laptop with 2x scaling and I've never had any issues.

ah the OG king of HTPC/Media PC Graphics card. I used to have one in a HTPC paired with an Intel Q6600

Wow, that was a powerful (hot) CPU for an HTPC!

You can get a cheap passive GPU; most don't even need power from the PSU. Something like an Nvidia GT 710 or 730.

Is there anything similar but from AMD? As a Linux user I'd rather not buy nVidia.

I've been defining specs for a new NAS/home server and was just waiting for this release to finish it. I have that issue, and my conclusion has been that you either get an old Nvidia for ~$30-40 (e.g., GT 710), or all the AMD options I've seen are ~$100. That generation of card seems to be well supported[1] by the nouveau open-source drivers at least, but I'd much prefer AMD as well.

For NAS and server applications it would be nice if someone did a motherboard like the Asus X370-Pro with an old GPU soldered on. That way you could have simple boot graphics without this hassle and only add a good GPU if you actually needed it for something. But I guess that's too much of a niche to bother with, just like low-end graphics cards.

Edit: Found a few AMD R5 230s in the same $30-40 range. Assuming the drivers are also good, that seems like a good option. Edit2: Researching some more, the R7 240 has a similar price and is probably the first that is already supported by the new amdgpu drivers, so it may be a better bet.

[1] NVE0 here: https://nouveau.freedesktop.org/wiki/FeatureMatrix/ Seems like everything but power-management is fully supported. Need to do some more digging but if the incomplete power-management means it just doesn't throttle up as much that's fine for the application.

Something like the R5 230 or RX 460, several generations old, but will do.

Unfortunately the market with low power AMD cards is non-existent. Maybe some older HD model if you can find it second-hand, but those can't do 4K@60Hz. There is no low powered (=passive cooling) card based on the recent AMD chips. I wanted GPU with only passive cooling for my Xeon machine, had to buy lowend NVIDIA. Hopefully Navi at the end of the year will finally change this.

What are you talking about? You can get RX 550s that are passive, or even one of the rare RX 560s which are passively cooled.

I can't find a single passively cooled RX550, albeit some are passive up to a certain temperature, which is likely acceptable for the usecase assuming it works as advertised.

worst case scenario you can always buy an aftermarket passive cooler and swap the cooler yourself

Passively cooled R5 230s are old but readily available. The newest passively cooled AMD card I'm aware of is the XFX RX460 but it's hard to find.

The RX550 is low power enough to not need a PSU connector but for some reason nobody has made a passively cooled version.

Mobo manufacturers could solder some weak GPU on board, but nobody seems to be interested so I guess they did their homework and decided it didn't make any economical sense.

Yep. Intel iGPUs have good open source Linux drivers too. I had hoped that AMD would have some basic graphics in the IO die, but it looks like that won't happen.

The G-line of CPUs with integrated graphics is also interesting to me! As system RAM gets quicker (it's my understanding that the frame buffer for these integrated GPUs comes from system RAM) and process nodes get better, a competent integrated GPU+CPU chip will be able to play AAA games at 1080p with some of the visuals turned up. Though the conspiracy theorist in me thinks that the console makers would never let that happen, as it would mean DIY PC gamers would be able to build <$500 machines that could play all the latest games.

Since console manufacturers seem to have been favouring AMD for graphics lately, this could actually be the reason for a lack of AMD iGPUs

There's always the 2200G and the 2400G, I'd expect those to get refreshed within the next year too.

2200G and 2400G from current gen ryzen have integrated graphics though.

They use the same chip for desktop and for server market where it doesn't make much sense to add a crappy iGPU. Intel with their scale could build much more different chips.

When I put together my computer a year or two ago, I just added some cheap $40 radeon board. It really didn't impact the bottom line much.

It's my understanding that Navi is just GCN but tweaked with faster memory.

TFA seems to state that it's a new architecture, although how different it is from GCN depends on how much salt you take with marketing material. They did say it has redesigned compute units; that seems to point in the direction of "not just tweaks".

It's not, it's "RDNA".

Thanks for clarifying. I read the article. I’m excited to see benchmarks.

The 2400G existed. It's a lower end chip, but has the GPU you want.

At close to 300 comments, I am surprised there is no mention of what I thought was the most important surprise: 32 to 64MB of L3 cache (AMD's slides quote 70MB total cache on the 3900X, counting L2). A lot of people focus on cores and threads as well as IPC. We already knew what improvement IPC could bring; we already knew what we could do with 32 threads. None of these are really new.

But 64MB of L3 cache? In a consumer CPU, at a price I would hardly call expensive (I would even go so far as to call it a bargain). We used to talk about performance enhancements and cache misses; we now have 64MB to mess with. We could have a whole language's VM living in cache!

>we could have a whole language's VM living in cache

When dual-core processors came out someone said you could now have one core run your stuff and another run the anti-virus. That was widely joked about. This feels a little close to that. Having more CPU cache than we recently had RAM ending up being used for programming language overhead.

Agreed. For those of us using VMs, the extra cache in each package is enough for the working set of the systemd we are obliged to run in each VM.

Looking back to the 8M total RAM I had on a Mac SE/30, running A/UX Unix and a MacOS GUI comfortably, the sloppiness of modern productions is a disgrace and an embarrassment.

What galls is not the wasteful extravagance. It's the failure of imagination that makes such meager, pitiable use of such extravagance. We make, of titanium airframes and turbojet engines, oxcarts.

Agree with both comments above, but do we really have a solution?

It is a trade-off between cross-platform support, time to market, and development resources. And unlike any other scientific and engineering industry, software development doesn't even agree on a few industry standards; instead everything is hyped up, and every 2 years something new comes around and becomes the new "standard". And we keep wasting resources reinventing the flat tire.

This! I usually run code where I'm cache-limited and adding more processes/threads slows everything down! With 64MB of L3 cache, I'd be able to run image-processing tasks way faster[0]!

[0] For the curious reader: I've got two machines, a 3MB-L3 i5 and a 20MB-L3 Xeon. So I'd be looking at roughly 20x and 3x improvements -- without taking into account other architectural improvements, like not-underclocked AVX2, and the GHz count.
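To make that concrete, here's a rough sketch of how much image data fits in L3 at each cache size. The 4 bytes/pixel (RGBA or float32 grayscale) and two-resident-buffers assumptions are illustrative, not measured:

```python
# Back-of-envelope: largest square image tile whose input + output
# buffers both fit in L3 at once, assuming 4 bytes per pixel.
def max_square_tile(l3_bytes, bytes_per_px=4, buffers=2):
    px = l3_bytes // (bytes_per_px * buffers)
    return int(px ** 0.5)

for mb in (3, 20, 64):  # the i5, the Xeon, and the new Ryzen 9
    side = max_square_tile(mb * 2**20)
    print(f"{mb} MB L3 -> roughly {side}x{side} px per tile")
```

At 64MB, two full-HD float buffers (about 8MB each) fit entirely in L3, which is where the big wins for blocked image-processing kernels come from.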

Thanks for the heads-up - I totally missed this "detail" when I skimmed through the article & specs table earlier today.

And just in case that somebody missed it, here is again the link to the PDF "What Every Programmer Should Know About Memory" by Ulrich Drepper...


...which was posted here sometime earlier this year and talks about every detail of RAM and L1/2/3 cache access times, architectures, etc. Very heavy for me (I've read about 25% so far) but also very interesting.

This. People tend to forget what a huge difference a few mb of cache can make. I'd prefer 1Mb more of cache over half a GHz of frequency.

This is a bit pedantic but capitalization matters in units. I think you mean 1MB instead of 1Mb (byte not bit). And "mb" would be a milli-bit or a thousandth of a bit, which is a unit that probably makes sense in some situations (e.g., estimating information content of a message).

(There's also the extra complexity around MiB vs MB for base 2 vs base 10 prefixes. In this case it would actually be MiB as RAM and cache is normally base 2 sized. But not everyone uses that, relying on convention from context instead.)
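A quick illustration of how the base-2/base-10 gap widens with each prefix level:

```python
# Gap between IEC (base-2) and SI (base-10) size prefixes: it grows
# with each prefix step, so "MB" ambiguity matters more for big sizes.
units = {
    "KiB vs kB": (2**10, 10**3),
    "MiB vs MB": (2**20, 10**6),
    "GiB vs GB": (2**30, 10**9),
}
for name, (iec, si) in units.items():
    pct = 100 * (iec - si) / si
    print(f"{name}: {iec} vs {si} bytes ({pct:.1f}% larger)")
```

So a "64MB" cache quoted in MiB is actually about 5% bigger than 64 million bytes; at the terabyte level the gap reaches nearly 10%.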

Yup. Just waiting on the benchmarks from independent reviewers and to see how XFR in this generation works but I’ll be getting a Ryzen 9 if everything checks out. 24 threads will be amazing for local development (microk8s, and others) when I’m not gaming and save me from having to build a separate box.

Ballparking the build in my head:

- Ryzen 9 or Ryzen 7
- 32GB DDR4
- RTX 2070 or equivalent Navi for games* (depending on benchmarks and if I decide to actually do anything with CUDA)
- everything else I have: NVMe system drive, case, PSU, etc.

* Vulkan is big in the games I am targeting: Rage 2, Doom Eternal

What will you be doing that will actually use 24 threads?

A common developer scenario can involve a few things that will eat a lot of threads:

1. playing some background music
2. running a local database
3. running a local webserver
4. running a browser
5. running an IDE
6. running all of that stuff concurrently while testing the backend code
7. doing builds which are multi-threaded in near-linear-speedup fashion in many languages/environments.

I don't know if 12 cores 24 logical is going to make that scenario feel overall better than 4 cores 8 logical, but I do know that 4x8 feels much much better than 2x4 in my own use cases.

#7 alone can be a really, really big win for long compiling projects.

Yeah, my docker setup alone runs a ton of processes. Aside from docker running a copy of my stuff, I often have tests auto-running, a separate REPL to try stuff out in, my editor, slack, music, browser. It all adds up and a bunch of cores/threads definitely makes everything run more smoothly.

Common developer scenario: make -j24

This. But to the OP's main criticism: I could very well do all of that with 16-threads and 8 real cores. I do a lot of work currently with distributed databases and I believe I need the cores for local testing along with everything else that gameswithgo mentioned.

Run electron apps.

Implying that there's a computer in existence that runs those fast.

Awesome, I just realised that the old "run Crysis" joke has been replaced.

Electron apps do manage to make a mockery of my PC's specs though.

I know this is a joke but aren't Electron apps single threaded by nature and usually memory hogs? So more memory would be better and having many cores is good too.

Electron apps have Helper processes. e.g. I currently have 3 Microsoft Teams Helper processes (with 37, 36, and 16 threads) running and 3 Spotify Helper processes (with 16, 9, and 4 threads).

Idle threads don't count.

Parent was asked about threads, not memory. Of course he also has 1 TB to keep those Electron apps happy.

Funny, never heard this joke on HN before.

make -j24

Be sure to have enough RAM, so you will not run out of it.

(For some reason, when building projects like LLVM with -j 16, 32 GB without swap may not be enough. With -j 2 it is enough, but it takes an eternity.)
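One way to pick N is to cap it by both CPU count and an assumed per-job memory peak. The 2 GB/job figure below is a guess for heavyweight C++ compiles like LLVM, not a measured number; calibrate against your own build:

```python
import os

# Cap make -jN by both logical CPUs and available RAM, assuming each
# compile/link job may peak around `gb_per_job` of memory (an assumed
# figure -- large C++ translation units and static links can exceed it).
def pick_jobs(total_ram_gb, gb_per_job=2.0, cpus=None):
    cpus = cpus or os.cpu_count() or 1
    by_ram = max(1, int(total_ram_gb // gb_per_job))
    return min(cpus, by_ram)

print(pick_jobs(total_ram_gb=8, cpus=24))   # RAM-bound case
print(pick_jobs(total_ram_gb=64, cpus=24))  # CPU-bound case
```

With 32 GB and 2 GB/job this would suggest -j16 rather than -j24, which matches the experience above of -j16 exhausting 32 GB only when jobs peak higher than assumed.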

If you build LLVM often, use the shared library build (BUILD_SHARED_LIBS=true). Most of the memory usage (and a large part of the time in incremental builds) comes from linking final artifacts if you do not use shared libraries.

Thanks for the tip, unfortunately, when I'm building it (occasionally), the LLVM itself is part of another project (AMDVLK) and changes too.

Indeed, if you're bottlenecked on RAM, memory bandwidth itself will also be an issue (it seems to be the foremost bottleneck in modern compute, outside of single-threaded workloads. Part of why C-like languages are again becoming popular these days - they economize on memory-bandwidth per core). Might want to skip Ryzen 9 altogether and wait until the Threadripper parts are announced.

Yes, building Ceph with less than 24 GB of RAM on a 16-core machine will run into out-of-memory situations here, so the OOM killer is summoned and kills the build.

Make sure to install and use ccache and give it a big cache size on a fast ssd (I use 96GiB). It speeds up rebuilds (after the cache is populated) quite significantly.

The initial plan is for a 32GB (2x 16GB sticks) with plans to move to 64GB when possible.

Having 24 threads doesn't mean that -j24 is the sweet spot. I have a 56-thread Xeon machine for building git.git, and I find that with its sockets/threads-per-core/cores-per-socket of 2/2/14, the sweet spot is closer to sockets*cores-per-socket = -j28.

Things speed up rather linearly up to -j28, but once I get past -j28 (say -j32) it levels off, and -j56 starts being counterproductive.

Same thing with the 160 thread POWER8 machine I have access to. That one runs 8 threads per core, and CPU-limited -jN tops out at around -j20.

All of this is very workflow- and CPU-specific, but generally speaking don't blindly trust what things like "htop" show you as the number of available CPUs; under the hood many of them aren't "real".
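A toy throughput model captures that leveling-off. The SMT gain fraction below is illustrative, not measured on any particular machine:

```python
# Toy model: jobs beyond the physical-core count land on the second
# hardware thread of an already-busy core, contributing only a fraction
# (`smt_gain`) of a full core's throughput. Defaults mimic a 2-socket,
# 28-core, 56-thread box; 0.25 is an assumed SMT benefit, not a benchmark.
def throughput(jobs, cores=28, smt_per_core=2, smt_gain=0.25):
    on_idle_cores = min(jobs, cores)
    on_smt = min(max(jobs - cores, 0), cores * (smt_per_core - 1))
    return on_idle_cores + on_smt * smt_gain

for j in (14, 28, 32, 56):
    print(f"-j{j}: ~{throughput(j):.1f} core-equivalents")
```

The model scales linearly up to -j28, then flattens sharply, matching the observation above; it doesn't capture the scheduling and cache-thrashing overhead that can make heavy oversubscription actively counterproductive.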

And on architectures like EPYC you run into NUMA issues and having to pin things to certain cores, right?

The OS will attempt to schedule tasks as close to their memory as possible. Pinning tasks to specific cores may be needed in certain workloads, but for loosely-coupled parallel tasks like compiling code, you'll do fine letting the OS do its thing.

Yes, I wanted to keep it short, but you can get pathological performance on such systems unless you carefully use the likes of taskset(1) or numactl(1) to smartly pin certain classes of jobs to a given CPU. This is a decent reference: https://www.glennklockwood.com/hpc-howtos/process-affinity.h...

make launches separate processes, so NUMA should not be an issue, there's no interprocess communication.

Generally NUMA doesn't matter for "make" workloads, so this is all academic, but that being said, no, this isn't how modern CPUs work.

When you start two unrelated processes one after the other (such as multiple compile steps) that operate on some of the same in-memory assets (files, things being sent over a pipe etc.) they're not just going to be in the main RAM, but also L1-3 cache, and the RAM itself may be segmented under the hood (even if it's presented to you as one logical address space).

Thus you can benefit from pinning certain groups of tasks to a given CPU/memory space if you know the caches can be re-used without the OS having to transfer the memory to another CPU's purview, or re-populate the relevant caches from RAM.
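On Linux this kind of pinning can be done from Python too, without shelling out to taskset. The core set below is a hypothetical node-0 mapping; check /sys/devices/system/node or `lscpu` for your machine's real topology:

```python
import os

# Linux-only sketch: pin the current process (pid 0) to a set of cores
# so related build steps keep reusing the same caches. The core numbers
# here are a stand-in for one NUMA node's cores -- an assumption, not
# a real mapping; look it up per machine.
node0_cores = {0, 1}
# Intersect with what we're actually allowed to run on, so the mask
# never references CPUs this process can't use.
mask = node0_cores & os.sched_getaffinity(0)
os.sched_setaffinity(0, mask)
print(sorted(os.sched_getaffinity(0)))
```

For pinning memory allocation as well as execution, numactl(1) remains the more complete tool, since sched_setaffinity only constrains which CPUs the scheduler may use.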

Do you have the memory and IO bandwidth to back that up?

I regularly max out 8 as a developer and could certainly make use of 16 or 24, and I am probably toward the moderate end of developer needs.

Examples: multiple VMs, big editors/IDEs, local databases, local k8s clusters, local network simulators, and dont even start with AI or big analytics stuff.

Develop multi-threaded scientific applications?

Do you work on a desktop machine? At home?

Whenever I'm having a remote day yes. Some projects can take literally hours to fully recompile. The more cores, the better.

Will also be getting a Ryzen if the singlecore benchmarks show it's reasonable. Bunch of games I play at home tend to absolutely trash single core perf.

Why do you "fully recompile"? Use Google Bazel and never fully recompile anything.

Because of debugging and making changes. Trying to get some centralized server to produce just the right combo of Windows SDK and MSVC (just an example) would take even more time.

And then there is the hassle of setting it up for all the possible projects one might work on.


Just some examples: instead of developing an Electron application, think of making changes to Chromium. Instead of developing a Qt application, think of developing Qt itself. Etc.

Even so, with a proper build system you shouldn't need to recompile. That said, I don't think Windows has what I'd call a proper build system.

One needs to recompile with a brand new checkout.

Most of the time build systems work, including the one in Visual Studio. But I have never encountered a system that always works flawlessly. From time to time one gets things like changes not being detected, or something else going haywire, and one has to do make clean and redo the whole thing.

Another thing is that when you're doing profile-guided builds, one has to do a full rebuild after each profiling run.

Not with Bazel (on Linux or Mac at least). It builds everything incrementally. If inputs that go into the build node did not change, they won't be rebuilt. You almost _never_ fully recompile, and with a cache incremental builds almost never take more than a couple of minutes. Google builds everything it runs from source, including things like compilers, runtimes, stdlibs, etc. Imagine if every engineer had to rebuild everything from the kernel up from source on every check-out. That wouldn't work.

Uhm, yes? What else would you do work on at home? On a laptop, with its shitty position for the head, back and the hands? I do have a laptop but that's purely for entertainment - my work is done 100% on a desktop, both at home and in the office.

Pretty much everyone I know uses laptops at home which is why I was curious. I've always wanted to try out a home PC build for work at home w/ linux. But it's just easier to keep using my work Macbook Pro for everything.

Also I always connect it to a large monitor + mechanical keyboard both at work and at home for any serious work... so not sure why you mentioned the neck/hand position.

Laptops with Thunderbolt 3/USB-C are pretty great for use with a docking station.

I get the advantage of having nice large monitors, proper clicky keyboards and mice, and the ability to charge/power the laptop - all over one cable. When I want to move away from the desk - unjack and keep going.

Why not? Lot's of FOSS development happens this way.

I do a lot of work related experiments at home for my own learning and to help what I do for my employer.

I work pretty much 100% on my desktop at home every day. Only reason I'm on a laptop right now is because of a power outage.

Man.. Loving this. Going to get the Ryzen 9 I think. My current machine is an 8-core i7 at 3.6GHz with boost to 4.0.

Having 12 cores without the hyperthreading issues Intel has, and boost to 4.6, is going to rock.

i7-4790K myself, going to hold out a couple months longer to see if they get the 16-core R9, or a new Threadripper, out.

Finally! I’ve been using a Ryzen 1800X since its release. Unfortunately, it has some stability issues and I’ve been waiting to upgrade to the Ryzen 3000, 7nm line.

This is going to be a solid 75%+ boost to performance, given I regularly max out my machines threads. Pretty amazing improvement in 2 years.

Bad ram that doesn't perform to spec is the cause most of the time.

Ryzen runs them tight to spec, whereas intel is more relaxed. Here tight is better, but it also means exposing lies in manufacturer specs on ram.

Cheapening on ram is never a good idea, but there's always the option of configuring more relaxed timings on the firmware settings, if the ram isn't up to its advertised spec.

It's also possible your CPU is one of the first iteration of 1800Xs, which had an issue that famously caused segfaults when compiling software on Linux. This was only seen in the first months of the 1800X, and AMD offered free replacements to those affected. It's likely a bit late for that, but you're better off upgrading to Ryzen 3xxx anyway.

I started getting crashes recently and so I did a memtest64 run. A couple of errors... but only in one of the tests.

I backed the memory clocks down from 3200 (which it's supposed to be rated for) to 3000 and it passed with no errors.

Are you running correct voltage for 3200? Double check the spec! (Got bitten by this).


Could be that it'll still continue to have stability issues. For some reason Ryzens are extremely picky when it comes to memory. There's really no guarantee that 3rd gen will resolve this issue. It's to the point where e.g. Corsair makes memory specifically designed to work with AMD CPUs. This memory typically contains Samsung B-die chips, which work fairly well.

No, the stability issues they're talking about are on a release day 1800X. It's not memory issues, and I've never had memory issues with either of my Ryzen processors.

This is the issue they're certainly talking about: https://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Se...

And it is assuredly not a problem with any Ryzen processors manufactured after the very first few months.

Well, there are some stability issues with my ryzen 2700x. At first there were CPU freezes in idle on Linux: https://bugzilla.kernel.org/show_bug.cgi?id=196683.

After applying workarounds, I still see some strange crashes, not sure if at least some of those are still related to the CPU hangs from the bug above. TBF this might not be the CPU's fault. This is all quite annoying to me and time intensive to investigate (where do I even start?). Even though I really like AMD's tech I am quite frustrated and I haven't had these problems with my previous Intel builds so far...

There was also a problem with P-states and Ryzen CPUs crashing/stalling in idle, but only with Linux kernels.

I never experienced it myself but in my BIOS there's an option about "power on idle" that's suggested to not turn off for compatibility (I don't recall the correct words but I could check).

It usually depends on MB manufacturers and BIOS/AGESA versions.

So I must be imagining my problems with Threadripper memory kits as of early this year, as well as my coworker's problems with his 1800X. Hmm, OK then. HN knows best.

I had troubles with my memory (4x16GB with a 2950X) until I realized it was configured at a 1T command rate by default. Things appear to be rock stable at 2T. Anyway, yeah, memory is more tricky than it should be on Ryzen.

Thanks for the tip. I didn't even know about this: https://www.reddit.com/r/Amd/comments/5ywq3s/ryzen_ddr4_comm.... There's a screenshot further down in the thread.

My desktop w/a Ryzen 1800x still randomly freezes. I disabled most of the power saving features in the UEFI, which definitely reduced the frequency but it still happens on rare occasions. :(

Sounds like a problem with enabled C6 CPU state. Check out the web for a particular setting in your motherboard's BIOS.

Mine does too, that is until I run

    sudo ZenStates-Linux/zenstates.py --c6-disable
(I've got 29 days of uptime right now and the last reboot was due to a system upgrade)
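For anyone applying the same workaround: one way to make it persist across reboots is a root crontab entry. The path below assumes a hypothetical checkout of ZenStates-Linux under /opt; adjust to wherever you cloned it.

```
@reboot /usr/bin/python3 /opt/ZenStates-Linux/zenstates.py --c6-disable
```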

Now imagine a Threadripper with the Zen2 cores, higher IPC and frequency would be certainly welcome. Have the 32 core 2990WX and it's an incredible CPU for compiling large C++ programs, running big test suites and never having to worry about running too many tasks at the same time.

Conjecture on my part, but I wouldn't be surprised if we also see a 64 core Threadripper - there's going to be a 64 core Epyc:


Although they may save it for a "Zen2+" or something similar, like they did with 32 core Threadripper

I am specifically waiting for 64c Threadripper. Would be also great if 32GB UDIMM ECC became available by that time to bump up RAM from 128GB to 256GB. That computer could then last a decade.

I'm looking forward to dual socket 64 core epyc for at work (128 cores)

A lot of AAA game studios with larger C++ codebases use Incredibuild a lot. I can imagine having something with this level of parallelism would be incredibly useful.

Don't even need Incredibuild, even MSBuild or just plain /MP option in VC++ can take advantage of it. A build from VS of the Unreal Engine 4 client takes around 2 min, for example.

I mean, yes, but even that is not enough once the project is big enough. I work on a huge AAA game in C++ and on my 8-core 16-threaded Xeon the whole thing compiles in 40 minutes. Incredibuild is a must to keep the compilation times even remotely acceptable.

Agreed, utilizing multiple processes helps a lot, although a larger number of cores across the entire network helps a lot with compiling the mass of translation units. Most workplaces I’ve done C++ at have had extra IncrediBuild agents with Intel Xeons to help with that

Icecream is the distributed compiler to use on Linux. Add ccache as needed.

Hell, I have the older 16 core 1950X and it's amazing for compiling large codebases. I'd heartily recommend these things, performance for dollar is fantastic.

Having the 24-core 2970X, I can confirm it is very nice for Rust development.

Nice. What build times are you seeing for clean builds of the rust toolchain itself? Curious to benchmark against my 2700x. I'd imagine near linear scaling with the core count.

I think the 3900x might be a happy middle ground. I'm guessing we would probably see (with the increased IPC, core count, and core clock) like 70-80% increases over a 2700x in these kinds of multithreaded workloads. So probably slightly more than half way to a 2970x or 2990wx?

3900x looks fantastic on paper. In general, if the Ryzen stuff is sufficient for your needs, it's a better value. You pay a big premium for the Threadripper boards (and big case and big cooling solution). So in that sense, the 3900x is definitely in a sweet spot at the top of the Ryzen range.

Tradeoffs: threadripper boards officially support ECC; Ryzen boards are hit or miss. TR boards tend to be priced around $300 whereas you can get a Ryzen board for $100ish. TR had (prior generations) twice the DRAM channels and way more PCIe lanes than Ryzen, so if you're doing GPU-intense work or something else with use for lots of PCIe, that's a plus. Not to mention, additional core count over Ryzen, although with greater inter-die latency. Not sure what that will look like with TR3.

Is 3900X worth $500 at list over $400 3800X at list? Actually, yeah, it looks at least 25% better to me (esp. the doubled L3) if you can use the cores. The 3800X is overpriced; they probably are learning from the 1700<->1800 dynamic in gen1. Is it worth it over the 3700X at $330? Maybe not.

For me the question is really, how long will Ryzen 3000 be on the market before those better IPC/clocks/core densities show up in TR3? PCIe 4.0 support is huge; AMD wasn't anemic on PCIe channels on Zen and Zen+, and PCIe 4.0 doubles bandwidth from 3.0. Hopefully those IPC gains do not come attached to Spectre/Meltdown-like vulnerabilities. I'm excited for Zen 3 TR! That might be worth an upgrade from the 1950X. Meanwhile, it doesn't seem like Intel will get to PCIe 4 until 2020 (although that's reasonably soon).

3800X is "gamer priced" :).

I think the 3900x is in a great position to provide the best of both gaming and productivity. Extremely aggressively priced at $500 for the horsepower it seems to give you.

I suspect there is going to be a 16 core 3950x later in the year. Maybe with slightly lower single core frequencies. But maybe 20-25% greater multicore performance.

I bet they are delaying that to keep something up their sleeves when Intel responds. And to not totally cannibalize TR prior to releasing TR3.

The X570 boards are going to be around $100-200 more expensive though. The PCB is a bit different and the specifications are tighter for PCIe 4. I think the cheapest board you'll see soon will be above $150 at the very low end, all the way up to $600 or so. Many of the prior two gens will have issues running the newer CPUs, and the board vendors are recommending against Zen 2 on boards with chipsets prior to X570.

Yeah - I wish I didn’t have to go threadripper to get ECC. I don’t need that much power.

It's not clear to me from TFA:

Do any existing AM4 mobos / chipsets have support for full PCIe 4.0 bandwidth (64Gbps)?

Or will the existing mobos be limited to PCIe 3.0 (~5-6Gbps)?

  All of the five processors will
  be PCIe 4.0 enabled, and while
  they are being accompanied by
  the new X570 chipset launch,
  they still use the same AM4
  socket, meaning some AMD 300
  and 400- series motherboards can
  still be used.
I was just reading about PCIe 4.0 and 5.0 yesterday [0], and some quick research indicates only a week ago it was announced some current AMD boards do support PCIe 4.0 [1].

Would be awesome because the rate when transferring terabytes across SSD RAID arrays will see a 3-10x increase from ~500-600MBps to ~1.5-6GBps+. Fantastic!
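For rough numbers: PCIe 3.0 runs at 8 GT/s per lane and 4.0 at 16 GT/s, both with 128b/130b encoding, so an x4 link (typical for an NVMe SSD) works out to:

```shell
# Per-lane throughput = transfer rate x 128/130 encoding efficiency;
# multiply by 4 lanes and divide bits by 8 for GB/s.
bw=$(awk 'BEGIN {
  gen3 = 8  * 128/130          # ~7.88 Gbps per lane
  gen4 = 16 * 128/130          # ~15.75 Gbps per lane
  printf "x4: gen3 %.1f GB/s, gen4 %.1f GB/s", 4*gen3/8, 4*gen4/8
}')
echo "$bw"                     # x4: gen3 3.9 GB/s, gen4 7.9 GB/s
```

So the gen4 doubling is real at the link level; whether a given SSD or RAID controller can saturate it is a separate question.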

[0] https://videocardz.com/review/pci-express-riser-extender-tes...

[1] https://www.pcgamesn.com/amd/400-series-pci-4-0-bandwidth-bi...

Existing 300 and 400 series boards may be able to operate at PCIe 4 speed for the CPU-provided lanes (as opposed to the ones routed through the chipset that you can't upgrade), however signal integrity issues may limit this to just the slot closest to the CPU. So far, I haven't heard about any particular boards that have been validated for a specific number of slots working at gen4 speeds. Whatever you're using for an SSD RAID array will probably get in the way of using gen4 speeds, since you likely won't be able to get gen4 speeds over any cables or risers without redriver chips.

It will be interesting to see whether they can match Intel in single-threaded performance across the board, and not just some carefully-selected benchmarks. This would be the first time since the Core2/Athlon64 days.

They already came close in a ton of benchmarks and were faster in others. The biggest issue was AVX. They could run AVX2, but at (effectively) 1/2 clockspeed due to their implementation being 128-bits wide instead of 256-bits like Intel's (which downclocked, but was still faster for non-mixed loads).

AMD now has 256-bit AVX2 units, but unlike Intel, they don't need to downclock due to 7nm TSMC's lower power requirements compared to Intel's 14nm process. This should also affect 128-bit AVX instructions. It should be possible to reorder and push 2 through the pipeline at the same time in a lot of circumstances.

> It should be possible to reorder and push 2 through the pipeline at the same time in a lot of circumstances.

To be clear, the existing chips had two 128-bit vector units, and the new ones have two 256-bit vector units. So that would get you 4 total.

Also each unit, at least on existing chips, is capable of either a single FMA or a completely independent multiply and add at the same time. I don't think Intel chips can do this?
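If the new parts really do have two 256-bit FMA-capable units per core, the peak math works out as below. This is a back-of-the-envelope sketch based on the figures in this thread, not a confirmed spec:

```shell
# Peak FP64 per core per cycle: units x SIMD lanes x 2 ops for an FMA.
# 2 units x (256-bit / 64-bit doubles) x 2 = 16 FLOP/cycle,
# double the old 128-bit design's 8.
flops=$(awk 'BEGIN { print 2 * (256/64) * 2 }')
echo "$flops FLOP/cycle"
```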

I'm begging for a good single threaded CPU that isn't $600

AIUI the 3900x delivers at just $500.

I am hopeful for the actual third party benchmarks.

Read that often here tonight. Is there a killer app for single core performance other than UI/UX?

Literally everything other than the rare, very specific workloads that are amenable to parallel processing! (This isn't hyperbole.)

There's nothing "rare" and "very specific" about parallel processing; what's rare and very specific is the amount of software that's been rewritten/redesigned to take advantage of it so far. Sure, there are inherently serial workloads, but most of what we use our machines for isn't like that. Parallel processing is not just about performance either, but also general stability: a many-core processor can run a lot cooler and be a lot less fiddly than a high-end CPU that packs the same amount of compute performance into a single thread!

>There's nothing "rare" and "very specific" about parallel processing, what's "rare, very specific" is the amount of software that's been rewritten/redesigned to take advantage of it so far.

People have been banging their heads against the 'rewrite this software to take advantage of multiple cores' wall for decades. The lack of progress is telling. For a straightforward example, look at the second half of this Factorio update blog: https://www.factorio.com/blog/post/fff-215

Factorio is a sim game that you would think on first consideration would hugely benefit from a multithreaded design. It turns out that doing so is actually slower(!). And although this example is pulled from a game, it is essentially the same story again and again, no matter what the subject.

Now consider the second half of your statement- that the main benefit of multi-core processing is that it provides more CPUs, so that if any one gets choked, the general environment continues to operate.

(Which is true, and a great advantage of having a multi-core CPU.)

But consider a little deeper, too. If the first, best defense we have regarding multi-core designs is that they are simply more single-cores to have on hand, what does that say about the relative value of parallel processing vs. single-thread performance? Inherently serial workloads dominate across the board, in every field. The few parallel problems we have, we have because people have put a lot of brain sweat in to figuring out what, exactly, we can even do with all these cores lying around.

Meanwhile, there are entire classes of problems that are simply waiting for better single-thread performance before we can move ahead.

This is a very real problem, and it isn't going away.
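The serial-bottleneck point can be made quantitative with Amdahl's law: for a parallel fraction p, n cores give a speedup of 1 / ((1 - p) + p/n). Even a workload that's 90% parallelizable caps out quickly:

```shell
# Amdahl's law: speedup = 1 / ((1 - p) + p/n) for parallel fraction p.
table=$(awk 'BEGIN {
  p = 0.9
  for (n = 1; n <= 64; n *= 4)
    printf "n=%-2d speedup=%.2f\n", n, 1 / ((1 - p) + p/n)
}')
echo "$table"
```

At 64 cores you're under 9x, and the remaining 10% serial portion only responds to single-thread performance.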

There aren't that many benefits to super high core counts in your average enthusiast's use cases. Even games don't benefit from higher core counts as much as you would expect.

The people who would benefit from more cores have the server specific lines of CPUs to choose from, so that makes consumer grade CPUs a compromise between core count and single core performance.

> Even games don't benefit from higher core count as much as you would expect.

And some games (like some Source-engine titles) crash if you have a high core count.

I certainly benefit from the higher core count because I usually have a VM, 40 browser tabs, Slack, and a bunch of other stuff open at any given time, but my parents would see no benefit with their 5 tabs + iTunes + Word usage.

In all seriousness, I've been wanting better single threaded performance for running a Minecraft server.

What kind of player count are you getting?

I have been running 4 players with one of the most taxing modpacks on a mid-tier Digital Ocean VPS with no hitches. Not many players, I guess, but in case you were curious whether you could use a VPS. Even when we had multiple excavators sending thousands of entities through sorting pipelines it was still doing surprisingly well.

Thanks for the suggestion. I'll look into whether a VPS is viable, but we often have poor internet at our LAN parties and get the best experience when the server is local.

It's a vanilla server and I also get around 4 players. The real problem occurs when people are generating new terrain while flying on an elytra, sometimes causing the server to crash altogether. When not exploring, it will frequently report "Can't keep up!" messages even when hanging around spawn, which I think might be due to the truly insane amount of hoppers we have (although haven't seen this as much in the recent update).

If you're curious, the CPU is a i5-3570K @ 3.40GHz. The game is certainly playable, but it struggles under load like I described.

A max-size reactor in Minecraft consists of 50,000 TileEntities, and it only produces a few million RF/t, enough to power a handful of max-tier void miners. Thousands isn't exactly impressive.

Well, we had a max-size reactor powering our excavators, so I guess we had those entities too. I didn't realise it took so many entities to run. Not to mention the hundreds of other pieces in our world's automation puzzle, all spread out quite far apart, with chunk loaders maintaining the network's presence. Things were routed, crushed, smelted, crafted and eventually stored or utilized, all automatically.

Intel just announced the i9-9900KS which has 8 cores all 5GHz. That should be top single core perf.

Most flight simulators, or any physics-heavy game, gain a lot from single-threaded performance

Yes, music production. If you want to run plugins in realtime you need fast single core for long plugin chains.

Games are still bottle necked by single thread performance

The last two Assassin's Creed engines benefit from more than 4 threads

Single threaded performance is irrelevant at this point

On the contrary, it is absolutely crucial for some of us (yes, even for developers). I work with an interactively compiled language (Clojure and ClojureScript) and while I rarely do full "builds", I care a lot about recompiling single files and reloading them in the browser. That is not a multi-threaded job and requires single-threaded performance.

This is why when looking at iMacs, I'd rather get the iMac than the iMac Pro. Multiple cores just aren't as important to me as is single-core performance.

The iMac Pro's Xeon CPUs have pretty good turbo boost frequencies. If your machine isn't pegged on all cores when you want to do the task specified in your post then you should have zero problems.

I watched the keynote live.

I'm sold, my desktop is most likely going to be a Ryzen (although not the 8/16 monster, come on, it's a desktop, if I need high core count, I have stuff at work for that).

The "monster" is the 12/24 and that's still only 105W. For a real monster you'd need a Threadripper. The cheaper 8/16 at 65W/329$ looks like a great choice. But I guess 6/12 for 130$ less is also nice.

I don't mind 12 cores / 24 threads on my desktop. Makes compiling Mesa, Wine and Linux kernel a lot faster.

I do a bunch of live video stuff (which, for the uninitiated, means "hi, I'm ffmpeg, I want all your processors") and I learned, the hard way, that I couldn't put a 16C/32T Threadripper in the box--it's a 4U case in a Gator transport case--because 180W would blow out my heat budget. 12C/24T at 105W should be doable. I'm excited; right now I'm constrained because stuff like my CG system is built on Chromium and so it gets a little chewy when I run multiple independent overlays. Doubling my cores--I have a Ryzen 1600 in there right now--and probably increasing single-thread performance by 40% should make my life pretty interesting.

Yeah, video encoding will surely benefit from it as well.

Back of the matchbook, I should be able to encode 1080p60 at something like medium presets.

"Medium" is really good.

Cool, but we haven't seen compiling benchmarks yet. 12 cores might be bottlenecked by dual-channel memory.

Compilation workloads are not really bandwidth sensitive at all. The existing quad-channel 32 core 2990WX compiles the Linux kernel faster than the 8 channel 32 core EPYC 7601 thanks to its higher clock speeds in spite of having half the memory channels. So I think it is safe to say that memory bandwidth is a total non-issue for this type of workload.

Benchmarks here:



The AMD Threadripper 2990WX has 32 cores with quad-channel memory, and it seems to do fine.

Would be good to see some benchmarks.

Agree. Hopefully the seventy megabytes of cache between the two chiplets will help...

Biggest surprise seems to be the 65W 8/16 3700X. Hopefully that means a bit of overclocking headroom.

Considering that the 105W 8/16 3800X only gains 300MHz base and 100MHz turbo for the extra 40W, I'm not sure that's the case. Still almost certainly what I'll be replacing my 4790k with though

Going to wait a couple more months to see if 16c 39xx shows up, or a new Threadripper... also on a 4790k.

Yeah, I think this might finally be the reason to replace my old Haswell.

This might just be a case of TDPs being essentially arbitrary. The 9900K is allegedly a 95W part, but even a moderate all-core OC means you have powers in the 200W and up range. AMD might have just jumped on the same bandwagon.

AMD's TDPs are calculated at all-core turbo. Intel's are only valid for base frequency.

Ah, so AMD doesn't have cheesy numbers- good to know.

Not entirely true, some AMD processors too can draw more power than the TDP value would suggest, due to XFR.

Agreed but the effect is much more limited. If you compare manufacturer TDP against each other, it can only go even more in AMD's favor in real world scenario.

Not very surprising given just the difference in manufacturing process, though; since Intel's next desktop generation is still 14nm, that won't be changing soon.

What's the outlook for AMD to provide the kind of performance counters and performance counter accuracy that is needed for rr?

For those curious like me: https://rr-project.org/

It's a debugger from Mozilla that works by recording and replaying program executions.

My question is how is their architecture doing regarding the assorted speculative execution baked-in issues and what kind of impact is there on the AMD processors compared to comparable Intel CPUs?

Those affect Intel much, much more than AMD, and once you include the fixes the comparison gets even better for AMD. Oversimplifying a lot: when doing branch speculation ("is it branch A or B? I'll guess A and precompute that"), Intel didn't bother with any access checks during the prediction and only ran them once it was confirmed the branch was indeed A (so it already had the result and merely did the ACL check afterwards), whereas AMD was already doing all the checks even during prediction.

That's why there is an entire range of such weakness that doesn't affect AMD while it hurts Intel very strongly.

Phoronix is the number one site for that sort of article.


Frightening lack of press around this. Basically all of Intel's and AMD's lineups are centered around high thread count right now. Completely useless if I can't reasonably enable SMT.

AMD's chips apparently don't have the security vulnerabilities that make SMT unsafe to enable on Intel chips - they released a white paper explaining why MDS etc aren't possible and what exactly the boundaries are on their chips: https://www.amd.com/system/files/documents/security-whitepap... It just didn't get a huge amount of press coverage.

So far, AMD has only been exposed to the Spectre (branch cache) information leaks haven’t they? They aren’t vulnerable to Meltdown or any of the recent MDS information leaks because they don’t speculatively execute if the thread doesn’t have access rights. Intel chose to speculate regardless and fixup the register contents on instruction retirement, which is why they have had so many problems.

I didn’t see any word on whether the vector units are still half width. That’s still a major performance advantage for Intel.

They announced they doubled the floating point, so I assume that means full width per clock.

Hear it here.


Ah cool! Hopefully the BMI2 instructions are upgraded too.

They are full 256 bit wide now, that's how it can claim a 2x float perf. improvement. It also doesn't have an "AVX" offset.

Takes guts to stick with that core count, but at least you get to enjoy the full 70MB of cache. Good thing Blender and Cinebench all fit inside that; not sure you can ever say the same for productivity workloads.

I guess AM4 also means no real improvements in the PCIe lane count: would love to see real PCIe and IF switches to give a bit of flexibility, and what they plan for a new Threadripper.

> I guess AM4 also means no real improvements on the PCIe lane count

The new Ryzen 3000 CPUs support PCIe Gen4.. so while the number of lanes will remain the same, their bandwidth could be doubled. Just announced Navi GPUs also support Gen4.

I am quite surprised that they're launching a consumer CPU with 70MB of cache! When most CPUs have ~8MB, I don't think there's ever been such a huge CPU cache jump in history.

Those dual CCXs mean the 16-core Ryzen could be ready for release (that's what I have my eye on!). It's funny, the difference between 12 and 16 cores is basically a whole i7 6700K like the one in my desktop.

Plus, it's great that Intel is being aggressive by releasing the 9900KS (which is a pretty good CPU for gamers). It's been a while since we've seen any real competition between AMD & Intel.

Looks like that $499 will include a cooler too. I'd probably not use it (liquid cooling is better imho) but an included cooler is a perk.

I absolutely love that AMD includes solid coolers with all of the Ryzen chips. I run my R5 1600 with the stock cooler, and, though it runs hotter than it would on liquid or a top tier aftermarket cooler, I've still never felt the need to worry about temps. Plus it's relatively quiet.

I think the cooler is a forgotten value add over Intel chips, since any chip you buy from them, whether it comes with a stock cooler or not, is going to require an aftermarket cooler in order to get decent temps/noise levels.

Same on my 2700X; it's got a nice industrial design for a stock cooler (if that is your thing) as well.

I wish they had a slightly cheaper option without a cooler. I already have a good air cooler from Noctua.

Ditto. I end up not buying their X chips because I don't want to pay for another high end cooler that I'm not going to use.

In my country stores are selling OEM Ryzen CPUs for the same price as BOX versions.

It is amazingly cheap (compared to Intel's $1.1K); wonder what could be the reason behind it.

They probably have something secret to counter intel in the 1k range. Probably 128 cores spread across a square-foot of silicon. Enough output that you can boil your coffee while gaming.

Or bake a pizza, some cookies. Maybe they’ll offer a cast iron skillet version at some point? Play crysis AND get a good sear on dinner.

Isn't that what AMD GPUs are for? They'd be eating their own market share for PCs that double as frying pans/ovens!

If you're basing that comment on TDP, think again.

NVIDIA redefined TDP to their convenience, to mean something more like "averages". So their numbers can't be compared directly.

Always look at third party measurements. I'm looking forward to Navi's, while on the topic, as they've announced large improvements in power efficiency.

In response to what you said about Navi, I'm also excited about that. I'd really love to see some really strong competition to Nvidia's offerings that might be able to give them a good fight.

Their dominance and tendency to push for proprietary features seems quite bad for the industry as a whole.

Yeah, I know. AMD GPUs also run a lot cooler as of the last time I checked. They've made a lot of progress, although the jokes from the R9 2xx-3xx era of a few years ago are still tempting.

If your use case is to fry some meat, I believe Intel still holds the crown.

I'm expecting some new, crazy high binned, extremely expensive Intel CPU with sky-high TDP to be announced soon.

You can do a reasonable drip coffee off 300 watts, so that future is already here!

I imagine their manufacturing costs are far lower than Intel's equivalent chip - the 3900X is essentially one and a half of a 3600X (since it has two CPU dies, and one interconnect), while Intel is still in the monolith business.

A modular approach! Kudos to AMD for making it work nicely (anyone remember the first Pentium Ds, that were just two Pentium 4s cobbled together to make a dual-core chip?).

Apart from that, I think you meant two-and-a-half, not one?

Nope, one-and-a-half! Two of the CPU dies, one interconnect die - that's going to be somewhere between one and two!

And their interconnect tech seems like it's going to be of huge importance in the server space - I can only imagine the yields on 8x4 core modules will be far higher than on a monolithic 32 core chip.

> I can only imagine the yields on 8x4 core modules will be far higher than on a monolithic 32 core chip.

The next EPYC's going to be up to 8 chiplets x 8 cores, competing against Intel's current 28-core monolithic die (or their dual-die non-socketed 56-core). How many cores remain enabled on a hypothetical next-generation Threadripper is an open question, but they would probably go beyond 32 cores total. And a 32-core Threadripper would probably not have 8 chiplets but rather four fully-enabled active chiplets and four mechanical spacers.
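The yield intuition can be made concrete with the classic Poisson die-yield model, Y = exp(-A*D). The defect density and die areas below are made-up but plausible illustrative numbers, not anyone's actual process data:

```shell
# Poisson yield model: yield = exp(-area_cm2 * defects_per_cm2).
# Eight small dies that each yield well beat one huge die that doesn't.
y=$(awk 'BEGIN {
  D = 0.2                      # defects per cm^2 (illustrative)
  chiplet  = exp(-0.75 * D)    # ~0.75 cm^2 chiplet die
  monolith = exp(-6.0  * D)    # ~6 cm^2 monolithic die
  printf "chiplet %.0f%%  monolith %.0f%%", 100*chiplet, 100*monolith
}')
echo "$y"   # chiplet 86%  monolith 30%
```

And with chiplets, a die with one dead core can still be sold in a lower-core-count SKU, which monolithic designs can only do by fusing off a large, expensive die.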

Intel is just woefully overpriced?

I'd say it's more relevant for something like the 3700X - air cooling is very reasonable for a 65W TDP chip.

You can get practically silent air coolers for ~$40 now too. The only reason I would do liquid cooling now is for the fun of putting one together.

Isn't AIO liquid cooling still a good solution for decent overclocks without emptying your wallet for a custom loop?

To my knowledge, those $40 silent air coolers perform well but still can't match a big liquid radiator.

> To my knowledge, those $40 silent air coolers perform well but still can't match a big liquid radiator.

The best air coolers can match AIOs. Specifically Noctua's air coolers can go toe to toe with most commercial AIOs, with one fewer point of failure (pump). At least according to Gamer's Nexus and similar sites.

The biggest arguments for AIOs is:

- Appearance/aesthetics

- Space around the CPU block (air coolers in this league are HUGE)

- Improved small form factor build flexibility

In terms of raw performance, only custom loops really challenge high end air cooling.

With air cooling you typically save some money (even a high end Noctua is often 40% or more cheaper than a branded AIO). No leak issues. Fewer points of failure.

I can't find the link but a youtuber tested various AIOs and air coolers, and a Noctua air cooler did the best job cooling. Unless you like AIOs for some other reason (e.g. some people don't like all the space air coolers take), if you aren't running a custom loop you're probably wasting your money on water cooling.

Add to that the extra hassle and maintenance you have to invest in water-cooling (plus the danger of leaks!), air cooling really starts to shine.

Was it the following video by Optimum Tech?


I recently purchased an NH-U12A and can confirm it is excellent.

Linus Tech Tips recently published a video called "why you shouldn't water cool your PC"; they found that a Noctua NH-U12A ran cooler and quieter than an AIO liquid cooler.


Interesting. Do you have any idea as to which Noctua cooler it was? Their highest end ones run really close in price to the lower end AIOs.

NH-U12A. And yeah, it's fairly expensive for an air cooler.

Thanks! Been a couple years since I built a new PC, and it's helpful to have that info for anything I might build soonish.

If you don't know about it, pcpartpicker.com is a fantastic resource for building a new system.

It makes researching and comparing a bunch of brands and parts as well as waiting for the best times to buy a breeze.

Yeah, I know about PCPP, used it to research/design my last build. The reason I was asking sbov questions is that I've kinda fallen off the wagon in terms of keeping up with hardware news since I did my last build.

Thanks for the recommendation, though; it really is an awesome resource.
