Marrying Vega and Zen: The AMD Ryzen 5 2400G Review (anandtech.com)
153 points by pella 7 months ago | 74 comments



tl;dr: "With the Ryzen 5 2400G, AMD has completely shut down the sub-$100 graphics card market. As a choice for gamers on a budget, those building systems in the region of $500, it becomes the processor to pick."


> With the Ryzen 5 2400G, AMD has completely shut down the sub-$100 graphics card market.

Well, almost. Integrated graphics generally take the place of additional CPU cores. For example, there is no 8-core Ryzen with integrated graphics. So if you need cores (like for compiling code) then you end up with a discrete graphics card even if you don't need fast graphics.

I actually wish Intel would make a cheap discrete graphics card for that purpose, because they have the most stable Linux graphics drivers.


To be fair, AMD's Linux drivers have been pretty solid for a while now. I don't have an objective comparison, but the general perception definitely has some catching up to do. For those Ryzens you'll probably need a very recent kernel, but that's to be expected with new hardware.


It depends. We got a few machines with AMD RX460s (or similar, I can't remember exactly) and the users wanted Ubuntu on them.

So I install 16.04, the most recent LTS release available, because I don't want these machines to turn into a pumpkin after less than nine months, but the graphics are a total fail: SVGA 800x600 only.

So the kernel on that release is too old, and updating the kernel on Ubuntu is fraught with peril. I look into third-party drivers and find that support for these cards has been dropped entirely, even from the third-party Ubuntu repositories. Then I go to ATI/AMD's website and find the driver package, but it turns out to be a tarball full of .debs with zero instructions on how to install or enable them. I was having flashbacks to working on Linux installs in the late 90s and having to hack everything to get it working.

In the end I had to install 17.10 and apply the hacks to make it work[1] just to get the graphics running, and now I'm going to have to revisit those machines in a couple of months to upgrade them to 18.04.

[1] Disabling the new systemd DNS resolver that crashes on literally every lookup attempt on our network. Also had to learn how to program the modeline equivalent for Wayland for when you're going through a KVM and don't have EDID. This was no picnic in later versions of X, but I had already figured it out and had earlier versions as reference. I still miss xconfig sometimes.


This is solved by rolling the hardware enablement stack forward on 16.04. https://wiki.ubuntu.com/Kernel/LTSEnablementStack


Yes, using the latest hardware is painful with Ubuntu.

It's gonna be a breeze in a few months though. And once that happens, the open source AMD drivers are going to be stabler than the Nvidia blob again.


Given that the Nvidia blob apparently doesn't work with Wayland, it's going to be a problem in the near future.


> So I go to ATI/AMD's website and find the driver package, but it turns out to be a tarball full of .debs with zero instructions on how to install or enable them.

You should have read the friendly documentation! I have AMDGPU running on 16.04 after running an install shell script (amdgpu-pro-install) in the tarball, as spelt out by the docs.


Too bad you didn't wait for Spectre. Because of Spectre they bumped the 16.04 HWE kernel to 4.13. :)


Especially with kernel 4.15, where AMDGPU got HDMI audio out.


These days I have far more issues with Intel iGPUs than my desktop's RX480.


Quad cores are still sufficient for games. Some games benefit from more, especially if you're also streaming, but not by that much.


The not-really-better nvidia 1030 they're comparing it against is ~$100, making the APU a much better choice indeed.


I think it would be amazing if they were also able to stick a GB or two of HBM onto the same chip as a kind of L4 cache / VRAM for the GPU. Performance would be amazing, especially considering that Zen's performance gets better with RAM speed and GPUs are almost entirely RAM-starved.


Rumor is that an interposer alone is $20, and HBM is also rather expensive. Basically, you'd easily double the cost of these $99 chips if you put HBM2 onto them... maybe triple the costs if you consider that AMD should charge a margin and actually make a profit from these designs.

After that, the GPU would be too strong on memory and too weak on compute. The minimum HBM configuration is one stack with a 1024-bit interface at roughly 250GB/s. Yeah, the same memory bandwidth as an RX 580 with 2304 shaders, while the 2400G APU only has 704 shaders.
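
Back-of-the-envelope on that mismatch, assuming the usual ~2 GT/s HBM2 pin rate and the 2400G's rated dual-channel DDR4-2933 (peak numbers only, real-world is lower):

    # Peak memory bandwidth: (bus width in bits / 8) * transfer rate
    def bandwidth_gb_s(bus_width_bits, gt_per_s):
        return bus_width_bits / 8 * gt_per_s

    hbm2_stack = bandwidth_gb_s(1024, 2.0)      # one HBM2 stack: ~256 GB/s
    ddr4_dual = bandwidth_gb_s(2 * 64, 2.933)   # dual-channel DDR4-2933: ~47 GB/s

    print(f"HBM2 stack ~{hbm2_stack:.0f} GB/s, DDR4 ~{ddr4_dual:.0f} GB/s, ratio ~{hbm2_stack / ddr4_dual:.1f}x")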

The Intel + AMD chip manages to solve the problem: the CPU, GPU, and HBM2 are all on the interposer, so the expensive interposer has a shared design. Second, the GPU is a fully dedicated Vega design that's roughly the size of an RX 580. That way, the dedicated GPU can "keep up" with the super-fast HBM2 RAM.


It's also not using the AMD-style interposer but Intel's EMIB which they claim is much cheaper.


I've never considered that as an idea before, but now am left wondering why it hasn't been done. Guessing the manufacturers make more money from full-unit upgrades than they'd expect to make from incremental memory unit purchases (even if they made the RAM slot proprietary).


Intel has a new chip coming out with AMD graphics and HBM memory: https://www.anandtech.com/show/12220/how-to-make-8th-gen-mor...


But that’s not a desktop CPU. I would gladly buy it if it were.


With a 100W TDP it basically is a desktop CPU.


There was a time where you could put an extra DDR2/3 DIMM on some AMD motherboards to complement their chipset based graphics.

http://www.tomshardware.com/reviews/790gx-graphics-sideport,...


HBM is fairly new so it is probably a cost issue for the target market.


Pending good motherboard support for 2160p60 (4K 60Hz), the $169 CPU looks like a great choice for a 4K home theatre PC. Just a few months ago I built a Ryzen 1500X based HTPC with a passively cooled geforce 1030 card.

The geforce1030, a slower part based on the same architecture as the 1050/1070/1080, gets me hardware-accelerated H265/HEVC and the ability to play most MKV files without dropping frames, even at very high bitrates. Typical playback for something like a 2160p60 24GB copy of the original Blade Runner is at about 18% CPU usage on all four cores using a recent VLC beta.

But the total money spent was something like $89 motherboard + $89 video card + $165 CPU. It would be nice to not have to buy the video card.


Be aware for 4K that almost no motherboards exist at this time that can pair one of these APUs with an HDMI 2.0 output. Almost all AM4 motherboards come with HDMI 1.4 outputs that limit 4K output to 30Hz - probably fine for movies, noticeable in the UI, and not great for games (even if rendered at lower than native res). Though there are some current-gen AM4 boards with HDMI 2.0 and native 4K60 output, it's somehow a premium feature that is bizarrely reserved for high-end gaming motherboards.

The next generation of motherboard chipsets from AMD should remedy this, but motherboards with those won't launch before April.


Yes, so the motherboards are now the main problem... in 2018 I certainly wouldn't build anything that's only capable of 30Hz. Would be a total waste of money. My current setup has a pretty basic B250 chipset microATX motherboard (no wifi, etc) which is fine because the design was with a dedicated geforce1030.

I'm sure that by April there will be similarly priced motherboards that do support 2160p60 out on their built in HDMI port, for use with these new CPUs.


For anyone basing a purchase decision on this: it seems I was wrong [1]. Apparently the APU dictates the HDMI version, not the motherboard. Many AM4 motherboards appear to support HDMI 2.0 with one of these new APUs.

[1]: https://smallformfactor.net/forum/threads/raven-ridge-hdmi-2...


I'm a bit fuzzy on this, but I think any motherboard with DisplayPort 1.2 or better will do the job. DisplayPort has been ahead of HDMI on the 4K60Hz game.

Picking a random NewEgg motherboard with AM4 & DisplayPort, it looks like it would do the job (see Specs):

https://www.newegg.com/Product/Product.aspx?Item=N82E1681313...

So does the cheapest board:

https://www.newegg.com/Product/Product.aspx?Item=N82E1681314...
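
Quick sanity check on the DP side, assuming DP 1.2's HBR2 link rate and 8b/10b coding (rough sketch, active pixels only):

    # DisplayPort 1.2 usable bandwidth vs a bare 4K60 payload
    lanes, hbr2_gbps = 4, 5.4
    dp12_usable = lanes * hbr2_gbps * 8 / 10       # 8b/10b coding: ~17.3 Gbps
    payload_4k60 = 3840 * 2160 * 60 * 24 / 1e9     # 8-bit RGB, ignoring blanking: ~11.9 Gbps

    print(f"DP 1.2 usable ~{dp12_usable:.1f} Gbps vs 4K60 payload ~{payload_4k60:.1f} Gbps")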


I did some research on this before building my current system (see parent comment). At the time I bought the parts, about two months ago, the A8 and A10 CPUs (pre-Ryzen generation) for AM4 looked like terrible options. Looking at CPUs, the whole series of currently shipping Ryzens doesn't have onboard video, and I wanted a 1500X because I knew I could overclock it by at least 200-300 MHz with little effort.

I almost went with an Intel 8th generation Core chip and Intel's integrated video. But the performance difference between Intel onboard graphics and an $80 to $90 geforce1030 was significant. I wanted to build something that would still be able to play ridiculously high bitrate HEVC.


I was going to say "I sort of wish I could get a Thinkpad or Latitude with one of these in it", then I googled and realized that both will be coming out this year.


Remember that this is the desktop APU, with a 65W TDP. In the article they mentioned that AMD doesn't have any plans for a laptop version of this APU.

Edit: oops, they already released the laptop APUs earlier, like the Ryzen 7 2700U.


This is the desktop (AM4) version of the same Raven Ridge silicon as in the existing Ryzen Mobile products. Laptops with the mobile-TDP APUs have already been shipping for a little while.


The Ryzen 7 2700U and Ryzen 5 2500U have integrated Vega GPUs.


cTDP down to 46W sounds about OK for a "workstation"-type laptop.

Unclear whether it will be BGA-packaged though.


I can imagine seeing it on a Thinkpad P model.


Yeah, laptops will have the equivalent lower power chip in the 15-25W range and 2 GHz instead of 3.5. Doesn't change my feelings. :)


They already do. The laptop versions were released a couple of months ago.


There are cheap netbooks out now with Raven Ridge chips but I'm hoping for something higher end. With a TrackPoint or the equivalent.


I have been waiting for this for a new HTPC build that can also do a good job as a gaming system. Now comes the fun of reading each motherboard spec in detail to figure out if it can output 4K@60 (HDMI 2.0?). By fun, I mean pain, as this info is hard to find on the boards I have looked at so far.


Raven Ridge supports 4k60 via HDMI 2.0b. However, no current mobos do. New mobos with updated chipsets should, and those are due out in April with the Ryzen refresh.


It looks like even though all motherboards are only officially HDMI 1.4, there are reports of 4k60 4:4:4 working just fine! YMMV, but this is good news nonetheless:

https://www.reddit.com/r/Amd/comments/7xd636/raven_ridge_hdm...


I would have hoped that this wouldn't be an issue, but I looked at the specs on a few ITX boards and they only list HDMI 1.4. Major bummer if that is true.


For 4K@60 or 1440p@144, you would need a motherboard with DisplayPort, not just HDMI. That should eliminate quite a few.


Wrong.

https://www.hdmi.org/press/press_release.aspx?prid=133

> September 4, 2013 – IFA 2013 – HDMI Forum, Inc., a non-profit, mutual benefit corporation, today announced the release of Version 2.0 of the HDMI Specification. This latest HDMI Specification, the first to be developed by the HDMI Forum, offers a significant increase in bandwidth (up to 18Gbps) to support new features such as 4K@50/60 (2160p),
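
A rough bandwidth check shows why that jump matters; the timing below is the standard CTA 4K60 figure as far as I know, so treat it as a sketch:

    # Why 4K60 RGB needs HDMI 2.0-class bandwidth
    pixel_clock_mhz = 4400 * 2250 * 60 / 1e6   # CTA timing incl. blanking: ~594 MHz
    payload_gbps = pixel_clock_mhz * 24 / 1e3  # 24 bpp (8-bit RGB): ~14.3 Gbps
    on_wire_gbps = payload_gbps * 10 / 8       # 8b/10b TMDS coding: ~17.8 Gbps

    print(f"~{on_wire_gbps:.1f} Gbps on the wire; HDMI 1.4 tops out near 10.2 Gbps, HDMI 2.0 at 18 Gbps")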


Ah, thanks for the check. I would still recommend getting a motherboard with DisplayPort (DP) to future-proof the core components. I bought a gaming monitor two months back, and it came with only a DP cable. I see DP recommended by most game rig builders.


HDMI 2.0 does 4k@60.


I really like that AnandTech's print view is just the same layout but everything on a single page.


This is such a breath of fresh air for PC gamers who have been waiting to build a budget gaming machine, ever since the crypto-mining frenzy messed up the GPU market. Most GPUs are selling at a 30%ish price hike right now!


PC building is a nightmare right now between RAM and GPU prices. It's clear that they are not a valued demographic to parts manufacturers.

Anyway, I would recommend a Ryzen 3 + GTX 1050 at a minimum for a gaming computer. The performance here on GTA V, a several years old game, is pretty poor.


Actually, they are. Arguably it's the cryptominer fools that the part manufacturers want nothing to do with, because chances are you ramp up manufacturing now (so 3-6 months until you actually see higher supply) and by then they are on to the next coin fad.


Most GPUs with 2GB or less have been almost unaffected by the price hikes (geforce 1030/1050/1050Ti), because you need more than 2GB of video RAM to mine Ethereum these days. It's the 4GB-and-up cards that have crazy prices.
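
Rough sketch of why the 2GB cutoff happened, using the published Ethash constants (1 GiB initial DAG, +8 MiB per 30000-block epoch; ignores the round-down-to-a-prime step, so it's approximate):

    DATASET_BYTES_INIT = 2**30    # 1 GiB initial DAG
    DATASET_BYTES_GROWTH = 2**23  # +8 MiB per epoch
    EPOCH_LENGTH = 30000          # blocks per epoch

    def approx_dag_gib(block_number):
        epoch = block_number // EPOCH_LENGTH
        return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2**30

    print(f"DAG near block 5.1M: ~{approx_dag_gib(5_100_000):.2f} GiB")  # ~2.3 GiB, too big for 2GB cards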


What are the chances that AMD will ever try to shut down the $200-$250 graphics card market?


If they can socket a Vega 16+ APU, they might be able to put a dent in the sub-$200 market. The Intel G platform uses Vega20 and Vega24 with HBM, but they aren't socketed.


Not gonna happen with DDR4. Clock for clock, the Vega8 and Vega11 go toe to toe when OC'd, pointing to bandwidth starvation.

Maybe with a bigger package they could throw HBM2 on there... Something like a Ryzen 7 2 CCX configuration (8c/16t) + Vega ~20 + 4GB HBM2 in a package that fits the Threadripper/Epyc socket.

At that point you're stuck paying a premium for the X399 platform, though; you should probably just buy a dGPU :)


There's already a huge amount of free space in Threadripper - the active processors only take up half the space.

The biggest issue will be heat dissipation - but most games are GPU bottlenecked (especially beyond 8C/16T) and games generally aren't designed to consider the Threadripper module structure anyway.

You could probably add the GPU and a couple of 8GB stacks of HBM to the two empty slots, then throttle one of the CPU modules when the GPU ramps up.

I'd recommend a water block.


Would you recommend overclocking memory for the best perf increase?


RAM speed is definitely important for CPU and GPU perf, but much like other Ryzen parts it hits a point where higher end kits provide little to no benefit. 3200MHz is about where you start hitting seriously diminishing returns, especially considering cost of high end RAM kits.

You can see a much larger gaming benefit by overclocking the IGP. In most tests I saw today, an overclocked Vega8(1100MHz) in the $99 part easily catches up to the stock Vega11(1250MHz) and can be pushed to ~1500MHz.

https://www.kitguru.net/wp-content/uploads/2018/02/xRaven-3-...

https://tpucdn.com/reviews/AMD/Ryzen_5_2400G_Vega_11/images/...
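
For context on what the IGP has to work with, here's theoretical peak dual-channel bandwidth at a few DDR4 speeds (a simple sketch; real-world results also depend heavily on timings):

    # 2 channels x 64-bit, theoretical peak in GB/s
    def dual_channel_gb_s(mt_per_s):
        return 2 * 64 / 8 * mt_per_s / 1000

    for speed in (2133, 2400, 2933, 3200, 3600):
        print(f"DDR4-{speed}: ~{dual_channel_gb_s(speed):.1f} GB/s, shared between CPU and IGP")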


Are there any Dell laptops with Ryzen APUs?


Not that I know of so far; they still only sell AMD A-series laptops. I'm also looking forward to an AMD Ryzen Mobile (U) part that can replace an XPS 13, since Intel is not an option.


No Dell laptops so far. Just these three:

ACER Swift 3 with Ryzen 5 2500U, Vega 8, 4GB

Lenovo IdeaPad 720S with Ryzen 7 2700U, Vega 10, 8GB

HP Envy X360 with Ryzen 5 2500U, Vega 8, 16GB

There is an all-in-one

DELL Inspiron 7775 AIO, Ryzen 7 1700 CPU + RX 580 GPU, 16GB


I see. Any idea when they are coming though?


The one piece of news I'm hoping to hear from AMD is that they're producing a deep learning TPU.


I'm sorta surprised that AMD hasn't made a "smallest Zen with double the biggest Vega" to appeal to the cryptocoin crowd.


BTC is ASIC based now. No GPU will ever compete in the BTC realm.

Ethereum is Dagger-Hashimoto, which tries to be RAM-bandwidth bound. GDDR5 and HBM2 can achieve RAM bandwidths of 500+ GB/s (compared to a typical CPU with dual-channel DDR4 at only ~50GB/s), so Ethereum's best machines are the cheap GPUs with GDDR5. HBM2 cards (Vega64 and Titan V) are faster, but I don't remember if they win on price/performance. HBM2 might win on ongoing costs due to being more power-efficient.

Monero is Cryptonight, which tries to be memory-latency bound (Ideally: executing out of L3 cache of CPUs). Unfortunately, it seems like the miners have figured out how to parallelize it, as HBM2-based cards manage to perform incredibly well in Cryptonight.

Basically, the only cryptocoin that MIGHT be worthwhile on an APU connected to DDR4 is Monero / Cryptonight. Even then, because Cryptonight is primarily memory-latency bound, I doubt that the GPU-portion would be any faster than the CPU. The majority of the algorithm is waiting for L3 cache to respond.

The L2 cache in GCN (and probably Vega) is 512KB and shared between 4 compute units, so it's too small to execute Cryptonight. As such, why execute on the on-board GPU when the CPU already maxes out the L3 cache?

Cryptonight requires 2MB of low-latency access per instance. Threadripper, with 16 cores and 32MB of L3, is basically the ideal CPU setup. But Vega64 managed to outperform Threadripper by significant margins, probably because HBM2 RAM is incredible (and there seems to be some kind of unexpected parallelization trick in the Cryptonight algorithm).
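
A quick sketch of the instance math, assuming the standard 2 MiB Cryptonight scratchpad (the L3 totals are the commonly quoted figures for these parts):

    SCRATCHPAD_MIB = 2  # Cryptonight scratchpad per instance

    l3_mib = {
        "Ryzen 5 2400G": 4,        # single CCX, 4 MiB L3
        "Ryzen 7 1700": 16,        # 2 CCXs x 8 MiB
        "Threadripper 1950X": 32,  # 4 CCXs x 8 MiB
    }

    for part, l3 in l3_mib.items():
        print(f"{part}: ~{l3 // SCRATCHPAD_MIB} instances resident in L3")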


>> Basically, the only cryptocoin that MIGHT be worthwhile on an APU connected to DDR4 is Monero / Cryptonight. Even then, because Cryptonight is primarily memory-latency bound, I doubt that the GPU-portion would be any faster than the CPU. The majority of the algorithm is waiting for L3 cache to respond.

Good guess. The CPU is nearly twice as fast as the APU's GPU, and limited by L3 cache as you correctly noted.

http://www.legitreviews.com/amd-ryzen-5-2400g-mining-perform...


Thanks. It's always good to see real benchmarks.

One "qualm" is that AMD's APU is a very interesting platform, which can share pointers and memory between the CPU and GPU. Some degree of caching is shared between the CPU and GPU since both are on the same die!

It's not a big deal for Cryptonight (which basically pushes AES-NI instructions, so it's fastest on the CPU anyway, and the L3 is so small that only two instances can run on the Ryzen 5 2400G). But in the general case (say, Ethash, which isn't as latency-bound / L3-bound), there may be custom code that can be written to take advantage of AMD's HSA platform.

Code which doesn't exist yet (and therefore can't be benchmarked). For one, the ROCm project doesn't even have Raven Ridge support, so HSA coding / CPU-GPU sharing can't really take place yet. But looking forward, it might be an interesting project to see Ethash implemented as HSA / ROCm code.


I don't see how that would have more appeal to them than dedicated GPUs. Is there a market for "normal PC with small mining capacity on the side" vs fairly dedicated rigs? I'd think nearly nobody is buying <$200 GPUs for mining.


I really doubt it. In a mining rig, all power spent on components other than the GPU is essentially pure overhead that cuts into the thin margin of a consumer setup. You need several discrete GPUs before it starts to be worthwhile at all.


I think everyone is expecting mining to fall, and they don't want to invest in the R&D, set up the manufacturing, and build the hardware only for it not to sell in the end. Mining seems like a gamble, so I personally wouldn't risk it; if miners want the chips, they can buy the regular GPUs.


I recently (yesterday) calculated[0] that Ethereum is roughly 64% of the GPU mining market by power. That's a lower bound - it's likely much higher, maybe 75+%. Hybrid POS/POW is expected in the fall and will likely come with a healthy drop in the block reward. Following that will be full POS (2019?) after which, at least on the main Ethereum chain, there will be no mining. I'd like to see the block reward dropped much sooner but that's a long shot request.

[0]: https://www.reddit.com/r/ethereum/comments/7wufiq/ethereum_i...


It sounds like Samsung is making something like that, based on a vague announcement on their earnings call. They've been working on a new GPU for several years now, and making a miner from it may help them validate the design while they're working on the drivers for general-purpose and gaming usage.


What's the cryptocurrency mining efficiency vs price ratio like? Is this going to be the new Rx 580 in terms of popularity?


Rubbish for crypto, but hopefully good value for games.


Why can't we just have Intel or AMD design the ultimate mining chip, one that scales better the more of those single-purpose chips you add? :)


Or a free socket, or maybe an NVMe mining chip?



