The benefits of X570 over B450 therefore have nothing to do with GPU performance but instead come down to either overclocking capability or, more significantly, I/O for everything else.
B450 only provides six PCIe 2.0 lanes and two USB 3.1 Gen 2 ports. That's not a lot of expansion capability, especially with NVMe drives. Want 10GbE? Or a second NVMe drive? Good luck.
X570's uplink to the CPU is PCIe 4.0 x4, double the bandwidth of B450's PCIe 3.0 x4 link, in addition to the chipset being more capable internally. So you'll see more boards with more M.2 NVMe slots as a result, for example, and Thunderbolt 3 support. Check out some of the X570 boards shown off - the amount of connectivity they have is awesome. That's why you'd get X570 over B450.
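For concreteness, here's a quick back-of-the-envelope check of the uplink doubling. Lane rates and the 128b/130b encoding efficiency are from the PCIe spec; both chipsets use a 4-lane uplink, only the generation differs:

```python
# Chipset-to-CPU uplink bandwidth, B450 vs X570.
# Usable Gb/s per lane = raw GT/s * 128/130 encoding efficiency.
LANE_GBPS = {"3.0": 8.0 * 128 / 130, "4.0": 16.0 * 128 / 130}

b450_uplink = 4 * LANE_GBPS["3.0"]   # B450: PCIe 3.0 x4 uplink
x570_uplink = 4 * LANE_GBPS["4.0"]   # X570: PCIe 4.0 x4 uplink

print(f"B450 uplink: {b450_uplink / 8:.2f} GB/s")   # ~3.94 GB/s
print(f"X570 uplink: {x570_uplink / 8:.2f} GB/s")   # ~7.88 GB/s
print(f"ratio: {x570_uplink / b450_uplink:.1f}x")   # exactly 2.0x
```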
Most people do not need a second nvme drive or 10GbE.
The thing is, most devices aren't (currently) bottlenecked by PCIe 3.0. A 2080 Ti shows only about 3% performance degradation running in 3.0x8 mode. Four lanes of PCIe 3.0 is about 4 GB/s (32 Gb/s), which is plenty for 10 Gb/s networking... or even 40 Gb/s networking like Infiniband QDR (which runs at 32 Gb/s real speed after encoding overhead). So you can reasonably run graphics, 10GbE, and one NVMe device off your 3.0x16 PEG lanes.
And AMD also provides an extra 3.0x4 for NVMe devices, so you can run graphics, 10 GbE, and NVMe RAID without touching the PCH at all.
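The bandwidth arithmetic above can be sanity-checked with a short script. The per-lane rates and encoding overheads below come from the PCIe and Infiniband specs; the ~3% 2080 Ti figure is a benchmark claim, not something computed here:

```python
# Sanity-check the link-bandwidth arithmetic above.
# (raw GT/s per lane, encoding efficiency) per PCIe generation.
PCIE = {
    "2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "4.0": (16.0, 128 / 130),
}

def pcie_gbps(gen: str, lanes: int) -> float:
    """Usable bandwidth of a PCIe link in Gb/s (per direction)."""
    rate, eff = PCIE[gen]
    return rate * eff * lanes

x4_gen3 = pcie_gbps("3.0", 4)
print(f"PCIe 3.0 x4: {x4_gen3:.1f} Gb/s ({x4_gen3 / 8:.2f} GB/s)")
# ~31.5 Gb/s, i.e. roughly the "4 GB/s" quoted above

# Infiniband QDR: 4 lanes at 10 Gb/s signalling, 8b/10b encoded
qdr_data_rate = 4 * 10.0 * (8 / 10)   # = 32 Gb/s real speed
print(f"IB QDR data rate: {qdr_data_rate:.0f} Gb/s")

# A 10GbE NIC fits comfortably inside a 3.0 x4 link
print("10GbE fits in 3.0 x4:", 10.0 < x4_gen3)
```

So a single 3.0 x4 link really does cover 10GbE with room to spare, and QDR's 32 Gb/s effective rate just about squeezes in.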
The real use-case that I see is SuperCarrier-style motherboards that have PEX/PLX switches and shitloads of x16 slots multiplexed into a few fast physical lanes, like a 7-slot board or something. Or NVMe RAID/JBOD cards that put 4 NVMe drives onto a single slot. But right now there are no PEX/PLX switch chips that run at PCIe 4.0 speeds anyway, so you can't do that.
Sure, but you won't find any board with a setup like that. You could also reasonably split the x4 NVMe lanes into 2x x2, but again, you won't find such a setup.
You'll find no shortage of boards with everything wired up to the PCH, though, and it's "good enough" even if it isn't ideal. The extra bandwidth certainly won't be unwanted, especially when you're also sharing it with USB and SATA connections.
> The real use-case that I see is SuperCarrier-style motherboards that have PEX/PLX switches and shitloads of x16 slots multiplexed into a few fast physical lanes, like a 7-slot board or something.
I think those use cases would instead just use Threadripper or EPYC. EPYC in particular, with its borderline stupid 128 lanes off the CPU.
(I'm fairly certain that for most gaming workloads, the bandwidth increase will only come into play when getting closer to 4K 144 Hz, which first-gen PCIe 4.0 GPUs are unlikely to push anyway.)