Most CPUs don't have the lanes for this anyway... after you install a video card and an NVMe drive, you're lucky if you have 8 lanes to spare, let alone 16...
Bifurcation lets you allocate the lanes in more configurations, but it doesn't actually give you more lanes or let you dynamically reassign them. Unless you're connecting more devices than the CPU and chipset planned for, you're not going to see an advantage.
The problem is physical slot waste. If you have an x16 slot and put an x16 GPU into it (like the majority of people do), there is no waste whatsoever. But if you put in an x8 GPU or some other x4 or x1 card, a lot of PCIe lanes are wasted, simply because you can't physically access them anymore. When your motherboard can bifurcate, for example to x8+x8, a £10 PCIe dongle can split that one x16 slot into two x8 slots.
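If you want to sanity-check what a card actually negotiated after a split like that, Linux exposes link width and speed in sysfs. Rough sketch below; the attribute names are standard sysfs, the filtering and formatting are just my own example:

```python
#!/usr/bin/env python3
# List each PCI device's negotiated vs. maximum PCIe link width and speed,
# using the standard Linux sysfs attributes (current_link_width, max_link_width, ...).
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    def read(attr):
        try:
            with open(os.path.join(dev, attr)) as f:
                return f.read().strip()
        except OSError:
            return None  # attribute missing or unreadable (e.g. not a PCIe device)

    cur_w, max_w = read("current_link_width"), read("max_link_width")
    cur_s, max_s = read("current_link_speed"), read("max_link_speed")
    if cur_w is None:
        continue
    bdf = os.path.basename(dev)  # bus:device.function address
    print(f"{bdf}: x{cur_w} of x{max_w} lanes @ {cur_s} (device max {max_s})")
```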
And if you ever have 1-4 spare slots, you could always fit another SSD or some USB ports in there. Also keep in mind that since PCIe is packet-switched, you can actually get away with connecting more lanes than the CPU is set up for, as long as you're not going full throttle on all of them at once and the chipset can cope :)
Right, but if there are 8 lanes to spare and I could use 4 for a 10GbE NIC and 4 for another NVMe drive, that's better than wasting all 8 on one of those functions and not being able to install the other at all.
Isn't 4 lanes for a 10 GbE NIC complete overkill? Two lanes should be plenty. One would do, assuming your combined full-duplex bandwidth doesn't exceed 16 Gbps (minus overhead).
For PCIe 4.0, 4 lanes correspond to roughly 64 Gbps of bandwidth in each direction.
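Rough numbers, counting only the 128b/130b line coding (real throughput is a few percent lower once packet framing overhead is included): a PCIe 3.0 lane is ~7.9 Gbps per direction and a 4.0 lane ~15.8 Gbps, so one Gen4 lane or two Gen3 lanes already clear 10 GbE. Quick back-of-the-envelope sketch:

```python
# Per-direction PCIe throughput for a few lane counts, accounting only for
# the 128b/130b line code; TLP/DLLP overhead (a further few percent) is ignored.
GT_PER_S = {"gen3": 8.0, "gen4": 16.0, "gen5": 32.0}  # transfer rate per lane, GT/s
ENCODING = 128 / 130                                   # 128b/130b line coding

for gen, gt in GT_PER_S.items():
    for lanes in (1, 2, 4):
        gbps = gt * ENCODING * lanes
        verdict = "enough for 10 GbE" if gbps >= 10 else "not enough for 10 GbE"
        print(f"{gen} x{lanes}: ~{gbps:.1f} Gbps per direction -> {verdict}")
```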
And yet, cheap $20 CPUs do have enough. My GPU rig runs dual E5-2680s at $20 each, so for $40 I have a total of 88 lanes, on a $160 dual-socket server motherboard from AliExpress. I currently have 6 GPUs. The motherboard supports PCIe bifurcation, so I don't really need this article, just additional hardware to split the physical slots.