> If only we had connectors which could actually handle such currents.
The problem isn't the connectors; the fundamental problem is sharing current between multiple parallel conductors.
Sure, you can run 16A over 1.5 mm² wires, and 32A over 2.5 mm² (figures taken from [1]; yes, it's for 230V, but that doesn't matter - the current is what counts, not the voltage). And theoretically you could run 32A over 2x 1.5 mm², since you'd end up with 3 mm² of cross section. But that's not allowed by code, because if either of the two legs disconnects entirely or develops increased resistance for any reason, e.g. corrosion or a loose screw / wire nut (hence, please always use Wago-style clamps - screws and wire nuts are not safe even if torqued properly, which most people don't do anyway), the other leg suddenly has to carry (much) more current than it's designed for, and you risk anything from molten connectors to an outright fire. And that is exactly what NVidia is currently running into, together with bad connections (e.g. due to dirt ingress).
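To put numbers on that failure mode, here's a quick back-of-the-envelope sketch in Python. All resistance values are illustrative assumptions, not measured 12VHPWR figures:

    # Two parallel legs share current in inverse proportion to their
    # resistance. Model each leg as wire resistance plus contact
    # resistance and see what one corroded contact does.

    TOTAL_CURRENT = 32.0    # amps the pair should carry together
    R_WIRE = 0.010          # ohms per leg (assumed)
    R_CONTACT_GOOD = 0.005  # ohms, healthy contact (assumed)
    R_CONTACT_BAD = 0.100   # ohms, corroded/loose contact (assumed)

    def split(r_a, r_b, i_total):
        """Current divider: each leg's share is inversely
        proportional to its own resistance."""
        i_a = i_total * r_b / (r_a + r_b)
        return i_a, i_total - i_a

    good = R_WIRE + R_CONTACT_GOOD
    bad = R_WIRE + R_CONTACT_BAD

    print("healthy:  %.1f A / %.1f A" % split(good, good, TOTAL_CURRENT))
    i_a, i_b = split(good, bad, TOTAL_CURRENT)
    print("degraded: %.1f A / %.1f A" % (i_a, i_b))
    # Heat in the overloaded contact grows with the square of current:
    print("contact dissipation: %.1f W" % (i_a**2 * R_CONTACT_GOOD))

Even with these fairly mild numbers the healthy leg jumps from 16 A to about 28 A, and the heat in its contact roughly triples - that's the molten-connector scenario.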
The correct solution would be for the GPU not to tie the individual incoming 12VHPWR pins together on a single plane right at the connector, but to use MOSFETs plus current/voltage sensing to detect things like differing supply availability (older GPUs at least could be powered in multiple ways, with, say, only one of two connectors on the card populated, and had to detect that) or an overcurrent because something has gone bad. But that adds complexity and, at least for the overcurrent protection, yet another microcontroller plus one ADC for each incoming power pin.
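As a rough sketch of what such per-pin supervision could look like in firmware (the pin values, thresholds, and names are all made up for illustration - this is not any real GPU's logic):

    # Hypothetical supervision loop: one current-sense reading per
    # incoming 12 V pin, plus a series MOSFET that can shed the pin.

    from dataclasses import dataclass

    I_MAX_PER_PIN = 9.5  # amps; the connector is commonly rated ~9.5 A/pin
    I_IMBALANCE = 3.0    # amps of deviation from the mean we tolerate

    @dataclass
    class Pin:
        current: float        # latest ADC reading, amps
        enabled: bool = True  # state of this pin's MOSFET

    def supervise(pins):
        active = [p for p in pins if p.enabled]
        if not active:
            return
        mean = sum(p.current for p in active) / len(active)
        for p in active:
            if p.current > I_MAX_PER_PIN:
                p.enabled = False  # open the MOSFET, shed this pin
            elif abs(p.current - mean) > I_IMBALANCE:
                print("warning: pin at %.1f A vs %.1f A mean "
                      "- check the connector" % (p.current, mean))

    # Example: one pin has gone open, the other five carry its share.
    pins = [Pin(9.1), Pin(8.9), Pin(9.0), Pin(9.2), Pin(8.8), Pin(0.1)]
    supervise(pins)

The point is simply that the fault is trivially visible once each pin is measured individually; tied together on one copper plane, the card only ever sees the sum.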
Alternatively, each 12VHPWR pin pair could get its own (!) DC-DC converter down to 1V2 or whatever the GPU chip actually needs, but that again needs a bunch of associated circuitry.
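Rough numbers for why one converter per pair is no small ask (600 W and 1.2 V are round, illustrative figures, not any specific card's spec):

    # One buck converter per 12 V pin pair, assuming a 600 W card
    # and a 1.2 V core rail (round, illustrative figures).

    P_CARD = 600.0     # watts over the 12VHPWR connector
    V_IN, V_OUT = 12.0, 1.2
    PAIRS = 6          # 12 V pin/ground pairs on the connector
    EFF = 0.90         # assumed converter efficiency

    i_in = P_CARD / V_IN / PAIRS
    i_out = (P_CARD / PAIRS) * EFF / V_OUT
    print("input:  %.1f A per pin pair" % i_in)
    print("output: %.1f A per converter at %.1f V" % (i_out, V_OUT))

Each of the six converters would have to source on the order of 75 A at the core voltage - which is why GPUs already use multi-phase VRMs; the change would mainly be in segregating the inputs per phase.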
Another and even more annoying issue, by the way, is grounding - because all the current that comes in also has to return to the PSU, and it can take any number of paths: the PCIe connector, the metal backplate, the 12VHPWR extra connector, the shield of a DP cable running to a Thunderbolt adapter card's video input, the SLI connector over to the other GPU and its ground...
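The same current-divider math governs those return paths; a sketch with made-up path resistances:

    # Return current splits across all available ground paths in
    # proportion to their conductance. Resistances are invented
    # for illustration.

    paths = {
        "12VHPWR ground pins": 0.004,  # ohms (assumed)
        "PCIe slot ground":    0.020,
        "backplate/chassis":   0.050,
        "DP cable shield":     0.500,
    }

    i_return = 50.0  # amps heading back to the PSU
    g_total = sum(1 / r for r in paths.values())
    for name, r in paths.items():
        print("%-22s %5.1f A" % (name, i_return / r / g_total))

Even the "wrong" paths carry real current - a DP cable shield was never specified to handle a few hundred milliamps of PSU return current.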
> The correct solution would be for the GPU not to tie the individual incoming 12VHPWR pins together on a single plane right at the connector, but to use MOSFETs plus current/voltage sensing to detect things like differing supply availability (older GPUs at least could be powered in multiple ways, with, say, only one of two connectors on the card populated, and had to detect that) or an overcurrent because something has gone bad. But that adds complexity and, at least for the overcurrent protection, yet another microcontroller plus one ADC for each incoming power pin.
> Alternatively, each 12VHPWR pin pair could get its own (!) DC-DC converter down to 1V2 or whatever the GPU chip actually needs, but that again needs a bunch of associated circuitry.
So as you say, monitoring multiple inputs already happened on the older xx90s, and most cards still do it. It's not hard.
Multiple DC-DC converters are something every GPU already has - that's the only way to deliver enough current. So all you'd have to do is connect specific converters to specific pins.
Electricity is fun!
[1] https://stex24.com/de/ratgeber/strombelastbarkeit