AMD Ryzen 5 8500G: A surprisingly fascinating sub-$200 CPU (phoronix.com)
171 points by mfiguiere 7 months ago | 160 comments



You'd think after AMD shipped the Xbox One, Xbox Series X, PS4, and PS5 with decent iGPUs and decent memory bandwidth, they might bless one of their APUs with a memory bus wider than 128 bits. It seems an obvious bottleneck, and fixing it would improve GPU performance.

It's coming with Strix Halo and its 256-bit-wide memory, but should it really have taken over a decade?


Jumping to a wider memory bus is most feasible in product segments that already use soldered CPUs and RAM. PCs have been reluctant to drop the sockets/slots until recently, and only then in thin and light laptops. In systems that don't have discrete GPUs, Intel hasn't been applying competitive pressure against AMD's integrated GPUs.

So it's really been a question of whether AMD could spare the cash to tape out a larger laptop chip with the goal of cutting off the low end of NVIDIA's dGPU sales for laptops.


This is exactly the reason - and perhaps they finally can do it because "Apple showed the way" with the M-series.

They obviously could have done it from a technical standpoint, as they had done it for consoles, but those are not general purpose from the viewpoint of whoever is laying out the motherboard design.


Before Apple does it: Bad idea. After Apple does it: Next big thing.


This happens in every industry.

Once a competitor gets away with a specific controversial or tricky-to-explain move, the others can pass through the same hole. Apple dropping the headphone jack or gluing in the batteries were the same kind of moves; they spread to other makers because Apple got away with them.

Or we could look at Twitter charging ridiculous prices for the API and Reddit jumping through the same hoop.

With CPUs I think it's more nuanced, but there's definitely a tradeoff that many makers wouldn't make on their own.


They seem to have this unusual ability to get consumers to suddenly demand what used to be too expensive.


More importantly you can get the Board of (insert company here) to finally do X if you can show that Apple did X.


When the alternatives are Windows laptops that barely work, it doesn't seem so expensive.

Case in point: back in 2018, my girlfriend and I bought new laptops. I opted for a MBP and she bought an Asus ROG Strix. Both were over 2k euro.

Two years later, her laptop has constant issues: Windows updating all the time, a WiFi modem that dies until you power cycle the whole machine, a machine that randomly turns on at night or doesn't go to sleep at all, and fans that sound like jet engines even at the lowest speeds.

My MBP 2018 is still a champ after all these years. I've had literally zero issues with it, and the only reason I'm contemplating upgrading is how amazing the M processors are.

And seriously? More expensive? A MacBook Air with 16GB of RAM and an M processor starts at 1500 euro. It has far superior build quality, software and battery life compared to any Linux/Windows machine in any category. And on top of that it runs completely silently. The Apple=expensive meme needs to die.


People who have $1500+ to spend on laptops forget that the vast majority of people don't spend (and can't spend) that much on a laptop. So much of the market is in that $500-$600 price range. You wouldn't be wrong to point out that an M1 or M2 Macbook is a far superior device, but it really doesn't matter.

And in terms of specs, Apple is still not an incredible value for the money. The laptop I'm using now is a last year's model Thinkpad. I bought it refurbished, with a two year warranty included, for $250. It's got a 12 core (edit: 6 core, 12 threads) Ryzen 5000 series, and for less than $150 I've added 32 GB of RAM (total RAM 40 GB) and a 1 TB NVMe disk (the computer has slots for two). That's an absolutely crazy deal for $400, and the build quality on these Thinkpads is miles above cheap Dell Inspirons or whatever.

For most people, this is a much more reasonable choice than spending $1500 on a laptop. My last laptop (Dell XPS) lasted over 12 years; I think the "laptop dies in two years" stories are the exception, not the rule, and Apple's laptops are certainly not lemon-proof, in that regard.


> It's got a 12 core Ryzen 5000 series

AMD's mobile 5000 series topped out at 8 CPU cores and 8 GPU compute units. I'm not aware of any laptops that used AMD's desktop CPUs prior to the 7000 series; there may have been some boutique OEM doing that but almost certainly not Thinkpads.


Sorry, it's 6c12t.


I have an Asus Zephyrus M16 (2021) and recently spent $50 to install a new 2TB NVMe stick, on top of the existing 1TB.

How much does a MacBook with 3TB of storage cost?


Have you never seen MacBook users with USB drives taped to the lids of their laptops?


Apple=expensive meme will only die if it stops being true. For example, I can get a Thinkpad T14 Gen 4 with the best processor AMD has to offer, 32 GB RAM and a 512 GB SSD for less than the base M2 Air (€1250 vs €1300), which has a pitiful 8 GB/256 GB RAM.

If that's too expensive, an E14 Gen 5 has a previous-generation processor but can be had in a reasonable configuration (Ryzen 5, 16GB RAM, 512GB SSD) for around €700.

You can knock a fair chunk (€60) off of the price by getting them without Windows as well, which is a nice option when you're gonna install Linux anyway. But the prices I mentioned are before you even do that!


> Thinkpad T14 Gen 4

It would still have a garbage-tier (relative to a Mac) 1080p screen? Of course, the upgrade to OLED is only ~$250, which isn't that bad.

The E15 is way too plasticky for it to be a fair comparison; the screen is awful as well.

> But the prices I mentioned is before you even do that!

While it’s on sale sure. Otherwise the price is much, much higher https://www.lenovo.com/fr/fr/configurator/cto/index.html?bun...


I picked the upgrade to the low-power screen, which should look much better than the base screen, although probably not as good as a Mac one. OLED would cost an extra €100 compared to that, although it does have some compromises such as higher power draw. Also, current Thinkpads have 1920x1200 screens on most models now, not 1920x1080. And the OLED ones are 2880x1800, which sounds nice but honestly 1200p sounds nicer as it'd be usable without scaling.

Those prices in the French store look absolutely wild to me. In the Dutch store, the base model T14 Gen 4 AMD starts at 999, without any discounts visible!


> with 16GB of RAM

Which is not a lot.

> superior build quality

Is that really true? There are PC laptops with comparable or only slightly inferior build quality.

> And on top of that it runs completely silently.

That's great. But it depends on your use case; thermal throttling can certainly be an issue. Also, Intel/AMD have mostly caught up performance-wise.


PC laptops with similar build quality cost the same or more.

x86 machines only compete when you don’t consider power consumption (which really matters for laptops). It’s so bad that most top end x86 machines won’t even try to hit those peak numbers unless the machine is plugged in and then only for short periods of time.

My M1 beats most x86 machines even after it throttles and it’s over three years old now. My M3 Max machine blows away the x86 machines I run into and it does it while still having good battery life.


> My M1 beats most x86 machines even after it throttles and it’s over three years old now

I doubt that. Last gen Intel/AMD cpus are pretty fast these days.


There are a few points here.

1. Most x86 machines aren't running the fastest CPUs. They are running U-series CPUs and usually aren't even running the best U-series CPUs. H-series laptops are relatively rare.

2. Even U-series machines have a peak power consumption north of 50 watts. H-series CPUs peak at over 3x that (157W for Intel 13th/14th gen HX CPUs). There's simply no way to dissipate that much energy with the small heatsinks available (even in a gaming laptop).

3. Most of these machines downclock massively when not plugged in.

4. The laptop size required for an H-series CPU is massive.

Putting all of that together, an M1 machine is simply faster than most x86 laptops because they contain low-binned, U-series CPUs. For the small fraction that contain H-series CPUs, My M1 is still generally faster if both machines are being used as laptops rather than bad desktops. This performance delta gets even bigger if the workload sustains enough that the x86 machines begin to throttle because they can't sustain the insane peak power draw.


You are comparing the software, not the hardware. Windows 10, by default, sucks on any hardware. You can make it behave a little, but you need special tools and knowledge (O&O ShutUp10 and advanced power modes).


>When alternatives are Windows laptops that barely work - it doesn’t seem so expensive. Case in point.

Yeah there are some junky laptops out there, but case in point I've only had sub-1000 Euro Windows notebooks from Acer, HP and Lenovo, and all worked fine and lasted no matter how hard I abused them. Maybe I'm just super lucky. Or maybe I was an informed consumer with a trained eye to spot the good from the junk. Or both.

>And seriously? More expensive? MacBook Air with 16GB of RAM and M processor starts at 1500 euro

16GB ram and how much storage?

Yeah, that's pretty expensive for the specs and EU purchasing power. Case in point, for Christmas I bought myself a new Lenovo with the new 4nm Ryzen 7840HS, 32GB RAM and 1TB NVME, for ~800 Euros sticker price, no special promotion or sale. That seemed like a good deal to me.

Now how much does Apple charge for 32GB RAM and 1TB storage? 2500+ Euros, and with only 24GB of RAM. Damn son! Apple is only price competitive on the bare minimum base models (8GB RAM on a "Pro" device?! Give me a break, Apple). When you start adding cores, RAM and storage, the price skyrockets.

To each his own if the price is worth it, I don't judge. My friend who does a lot of video editing gets his money's worth, but for my use cases (Windows, Linux, gaming, coding), that 800 Euro laptop will be more than enough; no need to spend way more on Apple for less functionality. Battery life on those Macs is tight though. If you're on the road a lot and don't need Windows/Linux stuff, I'd definitely get one.


> ~800 Euros sticker price, no special promotion or sale. That seemed like a good deal to me.

Sure, but at that level you usually get poor build quality and/or a thick plasticky chassis and a horrible screen.


Mine doesn't have any of those issues. I think some people are stuck with the impression that all Windows laptops are stuck in 2006 and haven't kept up in terms of quality. Mine is slim, full metal build, 2.5k 16:10 IPS screen.

So what am I compromising on compared to spending over 3x more on a Macbook? Is the Mac display, trackpad and build quality a bit better? Most likely. Is it over 3x better? Definitely not.


> is slim, full metal build, 2.5k 16:10 IPS screen

I’m not sure you can get something like that for $800?

> some people are stuck with the impression that all Windows laptops are stuck in 2006

No, you can certainly get very nice Windows laptops for 30% less than an equivalent Mac or so. I was only doubting the price.


> I’ve opted for MBP and she bought Asus Rog Strix. Both were over 2k euro.

She bought a gaming brand. There's no such thing as a good gaming brand, and the issue isn't with PCs but with buying something targeted at people who don't even use laptops as /laptops/, so the build quality is horrendous (particularly the hinges; it's common for gaming laptops to have failed hinges, but the target consumer doesn't care because the PC never leaves their desk).

There's no issue with Lenovo ThinkPad or Dell Latitude. There's a wide range of PC manufacturers out there, many of which make classes of computers Apple will never make, like the Panasonic Toughbook which are water, dust and shock resistant to the point where you can use them as weapons to bash someone's head and the computer will still run fine.

By the way, if you're unlucky enough to get a lemon, Dell and Lenovo offer /on site/ warranty service on their business class hardware (another perk of buying latitude rather than consumer targeted garbage like inspiron). You don't have to ship your computer back or go to a specific store like with Apple. They come to fix it.

> And on top of that it runs completely silently

Most PC laptops are quite silent, you can't get silence from a device that was made to push dedicated GPUs to their limits like a gaming laptop.

As for Apple's legendary build quality, the last MacBook I owned before ditching the Apple ecosystem entirely was affected by both the keyboard dust issue and the short display cable that bends too much whenever you open the lid.

https://www.ifixit.com/News/12903/flexgate

https://support.apple.com/keyboard-service-program-for-mac-n...

Yeah, great build quality. I think it's the first time in my entire life I saw a keyboard fail so quickly and so hard. And that display cable... who designs things like this? It should have been obvious to anyone who designs computers that putting that much tension on the cable was going to make it rip. After that, I can never rid myself of the suspicion that Apple makes hardware designed to fail after the warranty/AppleCare period.


They don't refer to it as a cult for nothing.


Or they have enough leverage for good contracts, and a lot of money on hand to be able to make semi-risky bets that others would not be willing to. Combined with good marketing and reputation.

That enables them to take bigger bets than most, and their reputation means customers are more likely to buy into their big bets.


If that’s what you call quality products, then I’d rather be a part of cult.


Quality products which can be bricked by a system update and have so many issues that everyone says not to upgrade for at least 6 months after a major release!

Also, their software is garbage if you need to connect to or use something outside of the Apple ecosystem.


I've been an MBP user at home for 6 years and have gone through at least 3 MBPs at work (switching companies and generations of MBPs). iPhone user for 3 years. My gf has been using iPhones and iPads since who knows when. A couple of friends are Apple users too. I have heard zero complaints about bricked devices.

I routinely heard and experienced issues with Windows and Android, though.


> If that’s what you call quality products

Well, those in the cult sure do.


An extremely large cult that has a spillover effect into larger society


> This is exactly the reason - and perhaps they finally can do it because "Apple showed the way" with the M-series.

This isn't generally what you want as a customer though. Most current games will need something in the range of 8-16GB of VRAM, but will need that in addition to whatever you need in terms of main system memory. If you also want 16GB of system memory, you need 32GB in total but only benefit from 16GB being fast, and you might often want more than 16GB of system memory.

Apple is fine with this because then if you want a lot of RAM they can charge you an arm and two legs for that much of the fast stuff, and solder it all so you can't upgrade it. But what you really want is to solder a modest amount of the fast stuff to the CPU, put it in the CPU socket (so even it can still be upgraded by upgrading the CPU, and then the socket doesn't need all of those pins) and still have the same DDR slots as ever to add gobs of the cheap stuff to suit.


I don't think LPDDR5 is that much more expensive than DDR5 on a per-GB basis. The performance advantage is mostly from a wider memory bus, and a bit from being soldered.

In other words, the CPU having access to huge amounts of memory bandwidth is not really costing the users much, after they've paid for the GPU and its memory system. The cost is really just the incremental cost of enlarging a GPU die by strapping some CPU cores on the side.

It may not be precisely most cost and performance optimized configuration for gaming, but on the other hand it provides huge benefits for anything that doesn't fit in a cheap GPU's VRAM (eg. LLMs).


> I don't think LPDDR5 is that much more expensive than DDR5 on a per-GB basis.

It is by the time the OEMs are through with it, and once you're soldering it you have to buy it from them.

> The performance advantage is mostly from a wider memory bus, and a bit from being soldered.

Even the little bit from being soldered should go away when CAMMs enter the scene. But the wider memory bus is part of what makes it expensive. It's not just the input pins on the APU, it's routing all of those pins through the socket and the system board and the memory DIMMs. Which you can avoid by putting some amount of HBM on the APU package without losing the ability to add memory via the narrower bus in the traditional way.

> It may not be precisely most cost and performance optimized configuration for gaming, but on the other hand it provides huge benefits for anything that doesn't fit in a cheap GPU's VRAM (eg. LLMs).

LLMs are nearly the only thing that currently uses this, and there is nothing stopping a CPU option with a large amount of HBM on the package from existing and fitting into the socket on an ordinary system board for the people who want that.

But CPU makers also don't really want to make a thousand SKUs with every possible combination of cores and memory, so what they're probably going to do is have lower core count CPUs with smaller amounts of HBM and higher core count CPUs with more. In which case you can still buy the $100 APU with a modest amount of HBM and then add as much DDR as you want via the slots on the system board instead of having to buy $1000 worth of CPU cores you don't need, or replace a perfectly good CPU with a different one, just to increase the amount of memory for the majority of applications that aren't bounded by memory bandwidth.


> I don't think LPDDR5 is that much more expensive than DDR5 on a per-GB basis.

It's also worth pointing out that while Apple is using LPDDR5 and getting more bandwidth out of it by just using more pins, that isn't what most GPUs use -- they use some version of GDDR, which is more expensive on a per-GB basis than DDR, because it has more bandwidth per pin.

So putting some "fast" memory on the APU package has the further advantage that you don't have to do what Apple did, you could actually put HBM or GDDR on the CPU package, and then the faster memory would be faster.


Isn't the main advantage of soldering RAM the speed you can run the bus at?

I don't see how it affects width much when we're only talking about 4 slots and 256 bits.


Doubling the bus width means more wiring and probably more PCB layers than offering two slots per channel.

Right now, using DDR5 SODIMMs instead of LPDDR5 would throw away half of the benefits of doubling the bus width, so that's a non-starter. Using CAMM style modules may allow for full-speed operation, but those have almost zero adoption so far and I think it's too soon to tell how widespread they'll become.

Whatever option for connecting the DRAM, doubling the bus width doubles the amount of DRAM the OEMs must populate the system with, adding another cost on top of the more expensive SoC and MLB.


> Doubling the bus width means more wiring and probably more PCB layers than offering two slots per channel.

Wouldn't that be offset, at least on desktop processors, with more slots and memory channels?


No. Desktops make the cost issues even worse, because supporting twice the memory bus width not only means more wires, but also a larger, more expensive CPU socket, and maybe a new heatsink mounting standard.


Shouldn’t you get more bandwidth by having all memory slots populated?


Current PC desktops have two memory channels and often four slots, because each channel has two slots. You get more bandwidth if you populate both channels, but four slots isn't better than two. If you wanted four channels the CPU socket would need more pins, because right now the third and fourth slots use the same pins as the first and second.

What you could do is put some fast/wide DRAM on the CPU package. Then you could still upgrade the fast memory by upgrading the CPU, and the pins don't have to go through the socket. This should also be compatible with existing CPU sockets, and not incompatible with also having ordinary DDR in the existing memory slots. You basically get an L4 cache made out of HBM, which is available to the iGPU.


You get more bandwidth by having all memory channels populated. When you have multiple slots per channel, filling all slots at best gets you a few percent better memory performance due to having more ranks of memory, but for DDR5 platforms the extra load on the memory controller due to multiple DIMMs per channel brings a big hit to the (officially supported) memory clock speeds.

The fastest memory configs for a desktop currently are using one single-rank module per channel. Using one dual-rank module per channel doubles the capacity for typically a slight performance penalty.
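
As a rough illustration of why populated channels, not slots, set the ceiling (a back-of-the-envelope sketch using nominal DDR5-5600 numbers and ignoring the few-percent rank-interleaving gains):

    # Peak theoretical DDR5 bandwidth scales with populated channels, not slots.
    def ddr5_peak_gbs(channels, mts=5600, bits_per_channel=64):
        # each DDR5 DIMM carries two 32-bit subchannels, i.e. 64 data bits total
        return channels * bits_per_channel / 8 * mts / 1000  # GB/s

    print(ddr5_peak_gbs(1))  # one channel populated:   ~44.8 GB/s
    print(ddr5_peak_gbs(2))  # both channels populated: ~89.6 GB/s
    # A second DIMM in an already-populated channel adds capacity and ranks,
    # not another 44.8 GB/s -- it shares the same 64 data pins.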


Is there any literature on this you would recommend?


Also the space for traces is a problem with sockets. At 256 bits you need lots of traces with very precise equal lengths.


If you soldered DDR5 memory, wouldn't you still need to have a lot of precise trace lengths and a lot of wasted space to make that match across banks of chips?

If the solution is to compensate for delays in a smarter way, then I would assume that solution could be applied to sockets too. Which goes back to asking what took them so long.


IIRC, some of that can be configured with the memory controller - doesn't it check the bus to make sure the timings are known in the board setup code?


Now that CAMM is a thing, there's nothing that forbids it from being used for non-laptop machines.


Unfortunately CAMM appears to have a larger footprint than a DIMM slot.


The footprint of a DIMM slot is not just the area the slot itself takes. DIMMs force you to do a lot of length-matching, which takes a lot of PCB space (and layers). The actual footprint of the DIMMs also includes all the space between the slots and the CPU. Especially LPCAMM2 does this inside the connector, and you can place the connector as close as you can without causing clearance issues.


>> Unfortunately CAMM appears to have a larger footprint than a DIMM slot.

We need a new motherboard form factor that assumes no PCIe graphics card. I like my ITX box and put the power connector and button where the expansion slot would normally go because I used an APU. But we could do a lot better with the board size by assuming integrated graphics. A lower riser on the back would be nice too - maybe only 2 USB high.


At that point, you can get a complete mini PC with a higher end laptop APU with a raised power limit anyway.


We need a new form factor that allows a small motherboard to be parallel to a back-mounted GPU/PCIe. This is what people are already trying to do with SFF cases and those wonky PCIe cables/boards.


True enough, but not so awful.

> Nominal module dimensions listed in the standard point to "various" form factors for the modules, with the X-axis measuring 78 mm (3.07 inches) and the Y-axis 29.6–68 mm (1.17–2.68 inches).

From Ars Technica. Whereas a regular DIMM is 5.5 inches long and narrow on the Y.


I think CAMM's direct comparison is SO-DIMMs, where the DIMM is parallel with the mainboard.


Yeah, same. Especially since those consoles launched all the way back in 2020, nearly 4 years ago.

I guess it's difficult to replicate the console performance when those run on GDDR as system memory and PCs on DDR so the bandwidth difference is huge.

But it's not like a system integrator like Asus or Gigabyte couldn't have built a small form factor PC based on a PC variant of the consoles APU and use GDDR instead.

AMD really missed out on this, especially during the pandemic when everyone was stuck indoors trying to game and looking for nonexistent GPUs.


GDDR isn't necessarily all that much better for general purpose CPU use.

Most importantly, buyers didn't want machines with soldered memory, and that was the main method of going with a wider bus without prohibitively expensive boards etc.


Xbox One came out in 2013, 11 years ago, with an AMD CPU.

PS4 came out in 2013, 11 years ago, with an AMD CPU.


Yes, but what's your point with those facts? Those were Bulldozer based APUs, not very good compared to what you could buy for PC back then, in an era where AMD wasn't very competitive and GPUs weren't overpriced unobtanium.


Just pointing out that AMD has been shipping cheap APUs with nice memory systems that increase iGPU performance for over 10 years.

Just mystifies me that AMD hasn't shipped similar in a laptop, SFF, or desktop yet. Especially during the GPU shortage. The AMD 780m iGPU does pretty well, clearly AMD could add more cores and ship a faster memory bus, like the ones on their Threadripper (4 channel), Siena (6 channel), threadripper pro (8 channel), or Epyc (12 channel).

Apple's doing similar with M1/m2/m3 (128 bit), M1/M2/M3 Pro (256 bit), and M1/M2/M3 Max (512 bit wide) available in laptops, tiny PCs (up to 256 bit wide) with the mini, and small desktops (studio).


They were not Bulldozer but Jaguar APUs.


Jaguar was the APU console codename but it was based on the Bulldozer CPU architecture.


Jaguar was the codename for the CPU core microarchitecture. AMD had two separate families of microarchitectures, much like Intel's Core and Atom families (since united in one chip in Alder Lake and later).


This was a different era with southbridge and stuff, but what about the Nvidia Nforce2: https://www.wikipedia.org/wiki/NForce2

Based somewhat on Xbox 1 design.


What's to stop someone from taking a mid-high end dedicated graphics card and soldering on a CPU, RAM, and IO? (Do they do this already?) I know it sounds like sacrilege, but I suspect most people upgrade their mobo/CPU/RAM/graphics all at once.


Soldering doesn't do anything about pin count. An off the shelf GPU chip is still going to support at most a PCIe x16 interface to other chips, which isn't enough to share DRAM with a separate CPU.

Taping out a new chip that combines existing GPU and CPU IP blocks is relatively straightforward, but taping out a large chip is extremely expensive.


What market would they be targeting?

People who want casual gaming buy consoles.

People who want gaming-level performance buy discrete GPUs.

AMD makes more money selling a CPU and GPU.


Why do they not use HBM and unified memory like Apple? I assume it's for cost reasons.


Apple does not use HBM, just LPDDR5 with wider interfaces. Xbox One/Series X, PS4/5, AMD server/workstation chips (Threadripper (and Pro), Siena, and Epyc), and Apple's chips above the base model M1/M2/M3 all use wider memory interfaces.

AMD APUs (and Intel with iGPUs) do use unified memory, as do the high end AMD (MI300A) and Nvidia (Grace+Hopper) server CPU+GPU chips.

This REALLY helps GPU performance, which is very bandwidth intensive, but even normal workloads have cache misses. Part of the benefit isn't just more bandwidth, but that you can have 4, 8, or 16 cache misses in flight in parallel, which can help keep more of your cores doing useful work instead of waiting on memory. This doesn't decrease memory latency, but can help improve throughput when using multiple cores.

Sad that even an M3 Pro (the midrange, 3rd fastest in the Apple M series lineup), available in Apple laptops and Mac minis, has more bandwidth than every AMD Ryzen and every Intel i3/i5/i7/i9. Even Intel's fastest i9-14900KF, using a maximum of 250 watts, has 1/8th the memory bandwidth of the M2 Ultra, 1/4th the bandwidth of the M3 Max, and half the bandwidth of the M2 Pro. The same as (or actually less than) a MBA or base Mac mini.
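
Those ratios roughly check out if you compute peak theoretical bandwidth as bus width times transfer rate (a sketch using commonly published bus widths and LPDDR5-6400/DDR5-5600 rates, not measured numbers):

    # peak bandwidth (GB/s) = bus width in bits / 8 * transfer rate in MT/s / 1000
    def peak_gbs(bus_bits, mts):
        return bus_bits / 8 * mts / 1000

    configs = {
        "desktop Ryzen/Core i9, 2ch DDR5-5600 (128-bit)": (128, 5600),
        "M2 Pro, LPDDR5-6400 (256-bit)":                  (256, 6400),
        "M3 Max, LPDDR5-6400 (512-bit)":                  (512, 6400),
        "M2 Ultra, LPDDR5-6400 (1024-bit)":               (1024, 6400),
    }
    for name, (bits, mts) in configs.items():
        print(f"{name}: {peak_gbs(bits, mts):.0f} GB/s")
    # ~90, ~205, ~410, ~819 GB/s -- i.e. roughly the 1/2, 1/4, 1/8 ratios above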


Doesn't the Deck have quad channel memory?


Yes, as do most DDR5 systems, but they are still 128 bits wide, i.e. 4 x 32-bit channels.


Ah! I thought 64-bit bus widths had been standard in DDR memory for, well, a very long time now? Why did that change? Sounds like nothing much changed actually, in terms of total (bus) bandwidth.


Yeah, up to DDR4 it's standard, or at least common, to have 64-bit channels. But with DDR5 it's standard to have two 32-bit channels per DIMM. Unfortunately many confuse the number of DIMMs with the number of channels, which is wrong for DDR5.


In the graphics benchmarks it performs mostly worse than the 5600g/5700g. That's pretty disappointing 3 years later.


It's the lowest end of the consumer APUs. The 8600G and the 8700G are those models' replacements.


The 5600G had 7 graphics compute units, the 8500G only has 4; if it performs within 10% of the 5600G, the compute units are almost 60% faster than 3 years ago.
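
A quick back-of-the-envelope check of that per-CU figure (assuming the 8500G lands about 10% behind the 5600G overall, as described above):

    # per-CU throughput ratio: 8500G (4 CU, ~90% of 5600G perf) vs 5600G (7 CU)
    per_cu_ratio = (0.90 / 4) / (1.0 / 7)
    print(f"{(per_cu_ratio - 1) * 100:.0f}% faster per compute unit")  # ~58%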


They had better be; 5000G chips are based on old GCN Vega stuff, while these are based on the latest RDNA3 architecture. They're also clocked MUCH higher, as AMD focused a lot on clock speeds when developing RDNA2.

In fact, pretty much all of the performance gain going from a 5700 XT to a 6700 XT is based on the latter's higher clock speed, as their specs are very similar otherwise (although a 6700 XT has 12 GB 192bit memory and a heap of cache, whereas the 5700 XT has 8GB 256bit memory that runs a small bit slower, and crucially, way less cache).


Yeah it's a shame, you really have to step up to the 8700G (or at least the 8600G) to get the true APU experience with modern graphics performance. That said even the bottom of the range significantly outperforms Intel's integrated solutions, and I support anything that gives them a kick in the pants to ship better iGPUs - one step closer to my work laptop being able to run the Windows desktop at 4k without turning into a laggy mess.


You'd at least hope it would be more power efficient thanks to the more advanced manufacturing process. And then a total disappointment: the new CPU consumes significantly more power both under load and at idle. The AI extensions are also missing. ECC is missing in all of the 8300G/8500G/8600G/8700G.


Got a link for the ECC claim? Last I read, the ECC issues with the AM5 platform were mostly fixed (at least many vendors are claiming ECC compatibility again, which they didn't a year ago).


I've just searched for official ECC support on:

  8700G https://www.amd.com/en/product/14066
  8600G https://www.amd.com/en/product/14071
  8500G https://www.amd.com/en/product/14086
  8300G https://www.amd.com/en/product/14091

5xxxG parts are also known for missing official support; only the PRO flavors have it.

Since ECC is for robustness, I'd really prefer it when both AMD and ASUS/ASRock/Gigabyte/MSI/... mention ECC support in the specification. Last time I checked, only ASUS mentioned it for consumer AM5 mainboards. (Gigabyte did it for at least one simple but expensive server mainboard.) For AM4 it wasn't such an issue with the other manufacturers either.


Historically, only the PRO variants of AMD's APUs have supported ECC.


If comparing to the 5600G/5700G, there's no difference in AI extensions or ECC. AI is new for the 8600G/8700G, and ECC is only in the "PRO" APUs.



One of the CPUs of all time

Edit: the title now has been fixed


Definitely one of the CPUs ever made, possibly ever.


Recently I noticed that my laptop, which has 32GB of RAM, is only capable of using 16GB of it, because the other 16GB is reserved by Windows 11 for "shared GPU memory".

This Dell Precision 3541 with an i9-9880H and a 4GB VRAM Quadro P620 runs just fine with Ubuntu, but in Windows 11, as I said, the integrated GPU as well as the Quadro get the "shared GPU memory".

For me the 4GB of the Quadro, which Chrome and other apps are using, is more than enough. The iGPU barely gets used and the BIOS has reserved 128MB of dedicated memory for it.

Windows doesn't allow me to change how much memory is assigned to the "shared GPU memory", so 16GB of RAM is empty while the other 16GB are getting stressed to an extreme, so that the machine becomes unusable.

Now, since this 8500G has an integrated GPU, I wonder if it has the same problem with Windows 11.

Any Windows experts in here who know how to fix the issue (the registry fix with "DedicatedSegmentSize" has no effect)?


I think you might be confusing things here.

I have an AMD Ryzen APU laptop, also with 32GB of RAM and also with Windows 11, and yes, just like in your case, the task manager GPU tab says that 16GB is shared GPU memory. But like the name says, it's shared, not reserved, meaning the GPU can use up to 16GB of system RAM for textures and stuff, while CPU apps are allowed to use nearly all of the 32GB of RAM minus what you reserved exclusively for the GPU in the BIOS and whatever the GPU is currently using on top of that.

And I know because I saw apps (not games) use nearly all my 32 gigs of ram.


But then why do my apps not use more than 16GB of RAM? Everything together never surpasses 16GB (maybe a bit more, because it gets compressed). And in the meantime the GPUs show <1% RAM usage.

And Ubuntu has no issues with it, while Windows crawls on its knees once that limit is reached.


Often that reserved memory is a soft limit that programs try to respect, but they can force allocate more. So what happens when you disable page file, close everything, open python3, and type this?

array20gb = bytearray(20 * 1024**3)

This should allocate 20GB of RAM to that process, starting with unused unreserved RAM, and then it should take your shared video memory after the unused unreserved RAM is allocated. If you get an error, then you know that reserved video RAM can't be used by programs.
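
If the single big allocation succeeds but the machine still bogs down around 50% usage in normal operation, a variant of the same test that grows in steps can show where allocations start slowing down or failing (a rough sketch, not a precise diagnostic; adjust the limit to your RAM size):

    # Allocate RAM in 1 GB steps and keep the buffers alive, watching Task Manager
    # alongside; bytearray() zero-fills, so the pages are actually committed.
    import time

    buffers = []
    try:
        for i in range(1, 25):  # up to ~24 GB
            t0 = time.time()
            buffers.append(bytearray(1024**3))
            print(f"{i} GB allocated in {time.time() - t0:.2f}s")
            time.sleep(1)
    except MemoryError:
        print(f"MemoryError after ~{len(buffers)} GB")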


Thanks! This is odd, and indeed it does start filling the RAM.

What could it be, then, that hinders allocation of more than 50% under "normal operation"? Initially it's fast, but over the course of a couple of hours it gets so slow that it takes around a minute to open a folder from the desktop via double-click while memory is at 50%. Temps are OK at that time and nothing is really getting used; e.g. the SSD is at 1-5%.


Probably a system or driver issue resulting in lending/grabbing the maximum amount for no reason. Performance-wise, a 4 GB dedicated video card gets no benefit from those extra gigabytes in the RAM buffer anyway.

Do Quadro cards support some kind of “fat and slow” compatibility mode for software that demands giant memory size on a professional card to fit all the data there, and can wait to get the results? Maybe there's some switch you've toggled to enable “20 GB VRAM” mode?


You see? Bullshit issues like these are why I don't buy laptops with discrete GPUs anymore, especially Nvidia.


No idea. Maybe a driver issue. Have you played around with drivers? From Dell's website versus Intel's website.

Have you tried a fresh install of Windows 11 and not an upgrade from 10?

Ask on the Dell forums or subreddits. Maybe there's others out there with this issue.


Are you sure the video memory is actually in use?

Like turn off Windows swap and use some application that will allocate around 24GB of RAM.

Windows always shows shared GPU memory as half the RAM, for example mine shows 64GB shared on my 128GB box, yet I have no problems loading up 90-100GB of VMs and running them full blast.

Now, the only thing that could be causing a problem here is if you're using any driver written by Dell themselves. They could very well be causing some kind of reservation bug that's not a direct windows issue itself.


> runs just fine with Ubuntu

Maybe that is the answer?


I believe most (all?) machines allow you to configure this through the UEFI interface (pre-boot). Search your laptop model and you should find a way to change the amount of RAM dedicated to the integrated GPU.


Dell's BIOS has no option to change it, but it says that it has reserved 128MB for it.

As I said, Ubuntu has zero issues with handing 28GB+ RAM to applications, so this is convincingly a Windows thing and not something related to the BIOS.

While searching for a solution I found another person having this issue with a Gigabyte Z97-D3H-CF mainboard. He has 32GB RAM and half of it goes to "Shared Video Memory". I downloaded the manual and the relevant section reads

`Intel Processor Graphics Memory Allocation: Allows you to set the onboard graphics memory size. Options are: 32M~1024M. (Default: 64M)`

So the mainboard has no option for 16GB of "Shared Video Memory", yet Windows 10 is causing this problem for him [0].

For me the "Works in Ubuntu" is what makes me disbelieve the BIOS-settings claims.

[0] https://answers.microsoft.com/en-us/windows/forum/all/change...


The reserved memory would show up as 'dedicated' memory. Shared is just the amount of host memory that can be assigned to graphics resources, which usually equals the system memory or some amount derived from it.

If the full amount of system memory isn't showing on Windows that's likely an unrelated issue you're experiencing (for example with UEFI/BIOS memory mapping mismatching whatever else) and it working on Linux implies that either Linux gets fed different memory layouts or it parses this broken case fine unlike Windows.

If applications aren't using all the memory, and it's also not showing up as cached, that's odd as Windows usually tends to target around 80% of physical memory usage (unless you're really not using that many apps or there's another driver issue going on). Different OSes account for memory usage differently, and there's rarely one single 'memory used' indicator in modern operating systems.


> If the full amount of system memory isn't showing on Windows that's likely an unrelated issue

The full amount of system memory is seen by Windows. It just refuses to hand out more than 16GB of RAM to applications, so I assume (maybe wrongly) that this is related to the other 16GB which the GPUs get assigned but have no use for.

The memory graph in the task manager (Performance) never goes above 50%.


Usually you can modify the amount of RAM dedicated to the GPU in BIOS.


This review seems to miss one of the most important competitors. The Ryzen 5 7600 costs only $20 more, and performs a good deal better for not a whole lot more power. I don't think there are a lot of customers to whom the iGPU difference is important, but not important enough to get a discrete graphics card.


> I don't think there are a lot of customers to whom the iGPU difference is important, but not important enough to get a discrete graphics card.

I think this is the exact segment of customers this chip is aiming for. You can throw this chip in a new build without a dedicated GPU, and get a playable (30+ fps) experience in Cyberpunk 2077. It's an option for someone wanting to spend entry-level money on a gaming PC with the option to upgrade to a real GPU further down the line.

EDIT: Just realized this is the 8500G, not the 8600G. The slower GPU and the fact that dedicated graphics would be limited to 4x PCI-E lanes make this much less interesting as an entry-level gaming product.


Yeah, the 8500G is just not interesting in that sense at all. It has only 4 CUs, which is twice what a 7600 has but still not enough for anything even remotely graphically demanding. An 8600G has 8 CUs, which starts to approach usable territory.


The power and cost of the discrete GPU alone in that scenario will far outweigh the cost of the APU here. The nice thing about the APU line is you can throw them in cheap mini PCs and not have to worry about having PCIe slots or "real" PSUs, you get a good CPU with a better-than-basic GPU without having to shell out double for a GPU or having a big PC to do it in. They are also monolithic on top of being combined parts so not just the active but the idle power is highly optimized.


I got a Ryzen 7 2700X, because it seemed to have good benchmarks for the price.

I have a vague feeling I might have been happier with fewer but faster cores. A lot of games put all, or almost all, of their load on one core only, particularly older games. A lot of old games have amazing modpacks for them nowadays, and I often run into CPU-bound slowdown and lag spikes in stuff like Stalker GAMMA, RLCraft, and Supreme Commander Forged Alliance Forever.

Overall I just dont know much about CPUs and just got one that was recommended by people at the time.


If there's an updated BIOS for your board, going to a 5700X3D or 5800X3D is a drop-in replacement (maybe with a third-party tower cooler if you're running stock).

Would be a noticeable experience difference, especially in games.


I'll add that this will likely only make a significant difference if your GPU is relatively powerful. If you're still rocking an RX 480 or similar, the GPU is going to be your bottleneck in most games and upgrading the CPU won't make as big a difference. I went from a 1700X to a 5900X with an RX 580, and I only noticed a difference in one game until I upgraded the GPU.


5700G might be a great upgrade from a 2700X.

Better performance at lower TDP and the integrated graphics are impressive. Even if you do end up with a discrete GPU, being able to do passthrough with a secondary can be quite useful.


Why would you get a 5700G from a 2700X? When you have a 2700X you already have a dedicated GPU, which is probably a lot better than the integrated Vega 8. Yes, even if it's an older card like an RX 580. I'm pretty sure even something pitiful like an RX 550 will be at least as good.

As such, you're better off getting e.g. a 5700X or even some X3D chip, which are much faster. The 5000G processors are slower than their 5000X or 5000 counterparts, since they have less cache.


Fair enough... I tend to upgrade my GPU more often than the rest of the system, so I wasn't thinking about it. That said, still a big difference for a lot of use cases.


I also had one; it was a great CPU in its day. But several generations have passed, games want more performance from the CPU and GPU, so I retired that 2700 to a server that sees less utilization.

For the past 30 years the best CPU of a few years ago can have less performance than the low end of today, that is evolution.


It's only just below $200 though. In my opinion, it's too close to the price of the 8600G, which has a dedicated NPU, twice the graphics core count and, more importantly, twice the available PCIe lanes: the 8500G only has 14 in total, of which 8 go to the southbridge -- the other 6 will be shared between the PCIe x16 slot and the first NVMe drive.


Good opportunity to remember that AVX-512 is more than 512 bit vectors. Too bad that Intel is dropping it from their consumer CPUs.

https://mastodon.gamedev.place/@rygorous/110572829749524388


A few months ago Intel announced AVX10, which is basically AVX-512 but without mandatory 512-bit vectors:

https://www.tomshardware.com/news/intels-new-avx10-brings-av...


Isn't AVX 512 very useful for emulating PS3 and Xbox 360 games?


The extra instructions that came along with AVX512, yes. The 512-bit vectors themselves, not so much, but it still helps a little.


I just want to see an X3D series APU with a gigantic video cache.


I think it's gotta be big, to be worth it. The X3D dies are tiny and the additional SRAM is flanked by unused silicon to keep the package a uniform shape.

The lowest end Radeon 7600 still has 8MB of infinity cache.

Giving an APU 64MB of cache would be a mistake at current performance levels, and giving it 4MB might not be worth the time or silicon, or the 4MB SRAM die would float away in the wind generated by the pick and place.


That would be irrelevant; tens of MB of cache do not make up for gigabytes of slow memory for textures. Modern games use 4-8 GB of RAM, mostly for HD textures. Screen buffers are already in the cache.


Does hardware accelerated video decoding work in Chrome on Linux for that CPU?


I've honestly had bad luck with this with _any_ CPU


I take a shotgun approach and install:

- libva-mesa-driver
- mesa-vdpau
- intel-media-driver
- libva-intel-driver
- libvdpau-va-gl

And hope _one_ of them works


Hardware accelerated decoding works fine on i5-12500.


A reminder that ROCm is getting better! Great news for budget GenAI users.


a surprisingly what


Surprisingly sub-$200


I definitely didn't expect it to be so sub-$200.


Surprisingly fascinating according to the article. It does seem like great performance for the price and Watt.


Limitation of HN title lengths probably. Or OP's capability to edit titles.


HN automatically filters words from titles that they think are clickbaity. "Fascinating" is probably in that list.


That's pretty stupid, no? Why doesn't it alert the poster about the clickbaity word, prompting them to remove it before being allowed to post, instead of silently removing it without the poster's knowledge and ending up with janky titles?


With the approach you're proposing, we'd possibly start to see titles like "AMD Ryzen 5 8500G: A Surprisingly F*$cin@ting Sub-$200 CPU".


Why? People could also do that now if they wish. You can't stop malicious compliance with a poorly implemented filter. And for that you have the flag button and dang who deletes them.

But if you do have a poorly implemented filter, why not let your users know the rules instead of them finding out through obscurity?

Openness is better than obscurity, no? Isn't that the FOSS ethos?


I prefer that to "a surprisingly sub-$200".


The HN ethos is to cultivate curiosity, so the word "Fascinating" seems on brand, even if it does help a bit to draw attention.


What are people using CPUs for?

I mean, I have a crappy 7-year-old CPU that I use for single-threaded dev, and it works for 99% of cases. In the 1% of cases, I threw on multithreading and it's fine.

Now my daily driver has a GPU and it unlocks new possibilities.

It's been over 10 years since I've really spent time thinking about a CPU.


>> What are people using CPUs for?

I compile C++ code with an AMD 2400G (zen1+) and parallel builds are way faster than single threaded. I also used the same box to add a bunch of parallel paths to the code (using OpenMP) which improved performance of the app by 3x-4x in some cases with little effort.

Compiling a full build is 30-60 seconds for me. If I turn on LTO it's several minutes and that's with parallel builds -j

I'm aiming to get a Zen4 or Zen5 with 8 cores one of these days. That should give about 4x speed boost. The new GPU has AV1 support too!


> and parallel builds are way faster than single threaded.

Not only that, but parallel builds on newer CPUs is way faster than parallel builds on 7 year old CPUs, even at lower clock speeds.


With the release of Ryzen in 2016/2017 the bare minimum upgrades Intel was pushing YOY no longer worked in the market. Since then the CPU space has been very exciting, you've been missing out.


Compiling rust code is slow and cpu bound. I have a fast cpu in part to improve compile times while I’m programming.


Did you experience a big gap between an older CPU and the current one?

I did not change mine for 6 years, but looking at some benchmarks, it looks like compilation times would not get more than a few percent faster per generation. I expect to get maybe 20% faster with a recent CPU in the same price range.

Until 2005 I was updating the CPU every 2-3 years, then every 5 years, and when I get the next one I plan to keep it 10 years easy.


I went from 2013 MBP to M3 Pro MBP and the performance improvement in Rust / rust-analyzer was staggering.


There is a huge difference between my laptop and my desktop when using VS Code with a few basic plugins. The difference in hardware between the 2 machines is mostly CPU and SSD performance; the SSD makes a big impact when opening projects with many small files, but the CPU is used for IntelliSense or even when you copy/paste a few thousand LOC from one file to another (the laptop fan goes to max for tens of seconds).

This is the biggest difference I see today. In the previous generation it was a huge boost in everything just because the old CPU was really old. I also keep CPUs for 5 years or more.


I’d be curious if your old cpu uses an NVMe or SATA drive. I’ve seen a 20% increase on big projects from NVMe alone - I suspect due to the big I/O queue depths of NVMe.


Modern CPUs also have more cores. Make and Cargo can keep a threadripper fed for a few minutes with a task that would run an hour on a laptop.


A new CPU would probably have double the core count (16c/32t) of your current CPU. C++ compilation can use all the cores.


Gaming, video editing, music production, 3D modelling, compiling C++, etc.


I do astrophotography and stacking images is the only time I notice CPU performance. It can take close to an hour depending on how much data, so I just go do something else.


A lot of games still bottleneck on CPU.


especially if you are trying to get high framerates


Or trying to run large maps in sim games.

Speaking as a 7800X3D owner, I don't think any CPU on the planet is sufficient for modded Rimworld yet.


I have a laptop with a 4080 and an AMD chip. With Nvidia Prime, you can basically use the chip's integrated graphics, which are a lot more power efficient and make the battery last way longer.


Whenever bazel prints that it’s compiling one of the rust parts of the codebase at work, then I mutter something about needing more faster better CPUs, and go fetch a cup of coffee..


Why not remote builds? https://bazel.build/remote/rbe


Modern typescript based web-app … you’re going to want the fastest single core speed cpu & nvme possible to make it not suck.


Actually... machine learning.

One would think it's all GPU. But you'd be surprised how single-core-speed limited parts of certain tasks are.


That's why most review sites never show older CPUs, just the current and maybe previous generation in their reviews.


This is an article from Phoronix, a site with more test results than you'll ever need. You can probably find some coffee maker performance numbers in Tux Racer in the database (if you figure out how to use it).



