First credible report of RTX 5090 FE with melted connector appears (tomshardware.com)
34 points by LorenDB 4 days ago | 19 comments





TGP for the 5090 is 575W. The PCIe slot supplies 75W, so it's 500W through the connector: 500W / 12V = 41.7A, or 52A after applying the 80% derating rule. Conductors should be 10AWG or thicker.
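
Rough sanity check of those figures in Python (a sketch using only the numbers quoted above, nothing from the card's actual spec sheet):

    # Back-of-envelope for the connector current, using the figures above.
    TGP_W = 575.0        # total graphics power
    PCIE_SLOT_W = 75.0   # what the PCIe slot itself can supply
    V_RAIL = 12.0        # 12 V rail

    connector_w = TGP_W - PCIE_SLOT_W   # power that must come through the cable
    current_a = connector_w / V_RAIL    # total current on the 12 V conductors
    derated_a = current_a / 0.8         # size for 80% continuous-load derating

    print(f"{connector_w:.0f} W -> {current_a:.1f} A, {derated_a:.1f} A derated")
    # 500 W -> 41.7 A, 52.1 A derated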

What are go-to connectors at these currents?


You'd usually have multiple wires and split the current up. Also, if you design carefully, you can let your wires run hotter than what's recommended for in-wall wiring (which is what the ampacity calculators usually assume).

If you know you'll have airflow and the insulation is rated for higher temperatures, you can run them hotter.


Well, obviously you can't run them so hot ...

For PC components, Molex Mini-Fit connectors. Not sure exactly which product line the 5090 specced, but you can get them rated for 10A+ per circuit.

But its cousin Micro-Fit is what melted, and it's been happening for a while, so infinitely parallelizing these sounds a bit sketchy to me.

The temperature coefficient of resistance of most¹ materials is positive, which is why parallelizing connectors works: contact points that heat up due to poor contact rise in resistance, so current shifts to the other connections with lower resistance (the toy sketch at the end of this comment illustrates the effect).

This is a plain case of insufficient contact area (or of lacking the mechanical safeguards/UX to ensure full insertion).

¹ most: pretty much everything that isn't explicitly researched/chosen for its negative coefficient. Definitely all conductor and connector materials. But some semiconductors are known footguns for not supporting parallelization due to negative coefficients and runaway heat effects (one of the things you learn in EE education).

P.S.: materials with negative thermal coefficient don't turn into superconductors if you heat them far enough; the coefficient is not constant across temperature and it's either just not linear and approaches some nonzero value, or it just flips to a positive coefficient at some higher temperature. Or the material just melts or burns and then behaves differently anyway.
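
A toy sketch of that self-balancing effect, with all numbers assumed rather than taken from any real connector: six parallel pins share 40 A, one has poor contact, and a positive coefficient lets the split settle instead of running away.

    ALPHA = 0.004                  # per-degC coefficient, roughly copper's
    K_THERMAL = 50.0               # degC of self-heating per watt (assumed)
    I_TOTAL = 40.0                 # amps shared by the pins
    R0 = [0.005] * 5 + [0.010]     # ohms; the last pin has poor contact

    r = list(R0)
    for _ in range(100):                                         # iterate to steady state
        g_total = sum(1.0 / ri for ri in r)
        i = [(1.0 / ri) / g_total * I_TOTAL for ri in r]         # current divider
        rise = [K_THERMAL * ii * ii * ri for ii, ri in zip(i, r)]  # I^2*R self-heating
        r = [r0 * (1 + ALPHA * t) for r0, t in zip(R0, rise)]    # PTC: hotter -> more ohms

    print([round(x, 1) for x in i])   # roughly [7.3, 7.3, 7.3, 7.3, 7.3, 3.7]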


I'm guessing that at some point these graphics cards are going to have to fit a temperature sensor next to that connector, to safely shut the card off if the connector is overheating. A wire of a given thickness will easily carry the calculated current, so wires generally don't need sensors. Connectors are more complicated: dust, crumbs, or oxidation can significantly increase resistance and cause rapid heating, so a temperature sensor there would actually make sense. Actually, given the history of these connectors melting, I'm surprised NVIDIA didn't do this already.
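
A minimal sketch of the kind of protection loop I mean; the sensor read and power-control hooks are hypothetical, not anything the cards actually expose:

    WARN_C, TRIP_C = 90.0, 105.0   # assumed thresholds, not from any datasheet

    def watch_connector(read_temp_c, throttle, emergency_shutdown):
        """Poll a (hypothetical) thermistor placed next to the power connector."""
        t = read_temp_c()
        if t >= TRIP_C:
            emergency_shutdown()    # cut the load before the plastic softens
        elif t >= WARN_C:
            throttle()              # reduce TGP until the connector cools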

Yes, but that would add several cents to the BOM of these $1k+ GPUs, so it's not likely to happen.

Maybe just go 24V (or even higher; it's not like the GPU connector sees a lot of switching, so contact arcing is not a problem). Yes, that means a new PSU, but at this rate that's going to happen anyway.

Yeah, more likely that'd be 48-55V; there's a much larger space of existing solutions at that level (telco DC power, PoE, and LED lighting), and it would likely also be more efficient.
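
For scale, the same 500 W at the voltages being discussed (just the earlier back-of-envelope figure re-run):

    for volts in (12, 24, 48, 55):
        print(f"{volts:>2} V: {500 / volts:5.1f} A")
    # 12 V: 41.7 A | 24 V: 20.8 A | 48 V: 10.4 A | 55 V: 9.1 A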

Ultimately yes, but that would imply changing designs a lot more. E.g., the CSD95372BQ5M (TI's VRM switch) doesn't go beyond 24V. Most PMICs/controllers are designed to interface with those low-voltage switches, so they would need to be redesigned too. That would involve the board vendor, the PMIC vendor, and the processor vendor (Nvidia/AMD/Intel). 24V may or may not be attainable for board vendors right now just by changing some caps and inductors.

I don't think a change to 24V is doable like that, even if the parts theoretically support it. It'll sit at a different efficiency point (which is also a thermal issue), for one, but the regulator feedback loops might also become unstable without reengineering.

I'll paraphrase a well-known remark about engineers, this time for electrical engineering: "any idiot can build a power cable that works, but it takes an electrical engineer to build one that barely works." The original concerns bridges... Edit: I should add that it's both a critical and a complimentary remark, depending on the outcome.

Very interesting that both ends failed at the same time. That tells me there was a manufacturing flaw at both ends (e.g., a bad crimp), the connectors were made from material that was too thin (again, a manufacturing flaw), or the GPU drew more current than the connector was designed to carry.

I think this can easily happen if some of the wires become disconnected once the first pin melts, pushing more current through the remaining pins than the rest of the connector can handle. Likewise, a short at the far connector could do something similar.

The problem with that is that to measure per-strand current you need to isolate at least one strand and put a current-sense shunt in it; then either that strand carries less current (because of the extra shunt resistance), or you do it for every strand and all of them have worse characteristics because of the shunts. And any tolerance mismatch between the shunt values results in unbalanced per-strand current…

So, yes, it's doable, but there's a thermal/efficiency/reliability cost.
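
Back-of-envelope for what that would cost, assuming a typical 1 mΩ sense resistor in each of six 12 V strands (values assumed, not from any actual design):

    R_SHUNT = 0.001            # ohms, assumed sense-resistor value
    I_STRAND = 500 / 12 / 6    # ~6.9 A per strand from the earlier figures

    drop_mv = I_STRAND * R_SHUNT * 1000
    loss_w = I_STRAND ** 2 * R_SHUNT
    print(f"{drop_mv:.1f} mV drop and {loss_w:.2f} W lost per strand, "
          f"{6 * loss_w:.2f} W total")
    # 6.9 mV drop and 0.05 W lost per strand, 0.29 W total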

Generally you'd just design enough margin into it that a failing strand doesn't matter. I suspect they just didn't add enough margin; maybe it's only designed to tolerate one of the six strands failing (I'd really expect a tolerance of at least two).

Or it's really a poor design regarding the mechanical latching; can't work if all pins have poor connection…
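
A quick check of that margin argument, assuming a ~9.5 A per-contact rating (the figure usually quoted for Micro-Fit-class contacts) and the ~41.7 A total from earlier:

    I_TOTAL, PIN_RATING_A, N_PINS = 41.7, 9.5, 6
    for failed in range(4):
        per_pin = I_TOTAL / (N_PINS - failed)
        status = "ok" if per_pin <= PIN_RATING_A else "over rating"
        print(f"{failed} failed: {per_pin:.1f} A per pin ({status})")
    # 0 failed: 7.0 A (ok), 1: 8.3 A (ok), 2: 10.4 A (over), 3: 13.9 A (over)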


I'm always wary of any power solution that splits current over multiple parallel wires without some way to limit or detect over-current in the remaining strands when one faults.

Nvidia's stuff can't do this anymore by design, unlike older cards.

This makes it sound like it's doing it anyway: https://www.tomshardware.com/pc-components/gpus/rtx-5090-cab...


