AMD Introduces Next-Generation AMD Ryzen 7 9800X3D Processor (amd.com)
69 points by doener 43 days ago | 27 comments



My computer is not even one year old and my 7800x3d is already two numbers behind? Wow.

Turns out it was all a waste because I haven’t played anything anyway, being an adult and all


The 8000 series was just APUs


I purchased the 5800X3D last year and have yet to replace my 2600. I did make minor upgrades to my RAM, from 4x DIMMs to 2x, and bumped my SATA SSD to an M.2 NVMe drive.


There was no 8000 series, and 9000 has largely been a wash.


There is an 8000 series. It even has a newer, better IMC.


Let's be honest, putting the hottest part at the top of the chip where it can be in contact with the heatsink is not the kind of thing AMD could have failed to think about until now. There's surely some reason why they didn't do it this way in previous generations? Does anyone know why?


The Gamers Nexus video on the 9800X3D said that previously, X3D SKUs were kinda skunkworks things that happened after die development. And that they have wanted to do this since 2019. Zen 5 is just the first time they had a chance to implement it because the X3D team had input on die design.


Ryzen X3D was the skunkworks project, not the 3D cache itself. The stacked cache was developed for the HPC and EDA industries. They also had plans for 3-high stacks, but I guess a customer for them never materialized.


I wish they'd put the kind of effort they put into Ryzen into Radeon. Legitimate ray-tracing competition, and something at least approaching what CUDA does. I know they already do ray tracing, but the disparity is large. And I know they're already working on GPGPU tooling, but seemingly at a snail's pace.


IMO ray tracing remains a gimmick and the die space would be better spent on more normal compute hardware. Basically every game that doesn't have a day/night cycle can prebake its lighting and have it look nicer than any possible real-time ray tracing. Ray/path tracing does look nice in Cyberpunk, though.


One thing mentioned is that the cache die is now the same size as the CCD; previously it was notably smaller, and "dummy silicon" was used to fill in the gap and get everything to the same height.

I can see "stacking" larger dies on top of smaller ones being extremely difficult to align. Perhaps the trade-off was made to allow a smaller cache die (so lower cost) at the cost of requiring it to be "on top".


Maybe the size change was to ease the via design? I.e., between the pins on the bottom and the cores above.


Yeah, the reason is that the cache doesn't need any connections outside the chip, and those connections are on the bottom.

With this solution they now need a load of vias going through the cache die to the pins at the bottom.


I would really love to see some kind of PoC of running a program fully in cache. With 104mb of cache you can run some performance-critical stuff entirely out of it.


Most binaries you encounter today fit in this cache. This is what makes AMD's x3d chips so fast in games. And you're off by 8x on the size, it's 104MB.


The binary is not interesting. The data it operates on is.
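
Not a full program, but the working-set effect is easy to demonstrate. Here's a minimal pointer-chasing sketch in C (the sizes and iteration counts are arbitrary choices of mine, not anything AMD specifies): chase a random cyclic permutation so every load depends on the previous one and the prefetcher can't hide the latency, then watch the cost per load jump once the array no longer fits in L3.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Build a random cyclic permutation: every load depends on the
       previous one, so the hardware prefetcher can't hide miss latency. */
    static void build_chain(size_t *chain, size_t n) {
        size_t *order = malloc(n * sizeof *order);
        for (size_t i = 0; i < n; i++) order[i] = i;
        for (size_t i = n - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
            size_t j = (size_t)rand() % (i + 1);
            size_t t = order[i]; order[i] = order[j]; order[j] = t;
        }
        for (size_t i = 0; i + 1 < n; i++) chain[order[i]] = order[i + 1];
        chain[order[n - 1]] = order[0];
        free(order);
    }

    int main(void) {
        const size_t steps = 100000000;  /* dependent loads per run */
        /* Working sets from 1 MiB (fits in any L3) to 512 MiB (DRAM). */
        for (size_t mib = 1; mib <= 512; mib *= 2) {
            size_t n = mib * 1048576 / sizeof(size_t);
            size_t *chain = malloc(n * sizeof *chain);  /* error handling omitted */
            build_chain(chain, n);

            struct timespec t0, t1;
            size_t p = 0;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (size_t i = 0; i < steps; i++) p = chain[p];
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            printf("%4zu MiB: %5.1f ns/load  (sink=%zu)\n", mib, ns / steps, p);
            free(chain);
        }
        return 0;
    }

Per-load time should stay roughly flat while the chain fits in L3 and then climb steeply once it spills to DRAM; on an X3D part the flat region just extends much further out.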


Since we're being pedantic: is that 2^10 bits or 10^3 bits per MB?


AMD writes MB, and they list the faster caches' sizes in KByte, where indeed I think it should be MiB and KiB respectively. And in the nonvolatile space (HDDs, SSDs) a GB means 10^9 bytes. Reminds me of DDR vendors telling us that memory runs at 8GHz (8GT/s; 4GHz in reality).


If we are being pedantic, 1 MB is 10^6 bytes; 1 MiB would be 2^20.
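
To put numbers on it: 1 MB = 10^6 = 1,000,000 bytes, while 1 MiB = 2^20 = 1,048,576 bytes, about 4.9% more. So if that 104MB figure is really binary units, it works out to 104 × 2^20 ≈ 109 million bytes.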


I thought of that soon after my comment but I felt my point was made even better because of the bad maths



DRAM cache is not comparable to SRAM cache.


I'm very curious why games benefit so much from a 50% larger L3 cache. Checking the technical specs of the Ryzen 7 9800X3D and Ryzen 9 9950X, the L1 and L2 caches are smaller on the X3D part while the L3 cache is bigger.

Can someone help me better understand this? Okay, the CPU can keep more data in the cache die when building frames, so fewer memory fetches, but the performance difference (according to the reviews) seems too big for +32MB of L3 when comparing those two CPUs.

64MB of L3 on 9950X is still A LOT of cache.
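
A back-of-envelope (latencies purely illustrative, not measured) shows how nonlinear cache sizing is. Say an L3 hit costs ~10 ns and a trip to DRAM ~80 ns, so the average cost per access is hit_rate × 10 ns + miss_rate × 80 ns. At an 80% hit rate that's 0.8 × 10 + 0.2 × 80 = 24 ns; if the extra 32MB pulls the hot working set in and the hit rate up to 95%, it drops to 0.95 × 10 + 0.05 × 80 = 13.5 ns, nearly half. The effect is threshold-like: extra cache does almost nothing if the working set already fit, or still doesn't fit, but a lot if it moves the hot data across the line, which is presumably what happens in many games.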


Really looking forward to 3rd party benchmark results.

Hoping to replace my i9-9900K this generation, but the Intel Core Ultra 9 285K was incredibly disappointing, and the Ryzen 9 9950X wasn't that great, either.

Only 8 cores seems a bit of a letdown when the 9950X has 16, but apparently it parks half the cores under high load, which seems to defeat the purpose of having so many cores, so maybe it's not a big deal.


Current rumor is that the 9950X3D will be released early next year. Personally I'll take the lower TDP, but I'm only using this system for gaming.


Why did they mention "gaming performance" (i.e., frame rate change, which isn't always directly connected to CPU performance) and not absolute performance (across a range of benchmarks)? The latter is more meaningful.


1) It's normal in these benchmarks to ensure the GPU is not a bottleneck and to use games that are primarily CPU-bound.

2) The X3D CPUs are primarily aimed at people obsessing over getting another hundred FPS out of a ten-year-old game. The places where these CPUs outrun a similarly priced CPU with a standard cache design are niche, and specific games are one of the prime niches they do well in.



