Nvidia's RTX 5090 power connectors are melting (theverge.com)
323 points by ambigious7777 20 hours ago | 344 comments

12VHPWR has almost no safety margin. Any minor problem with it rapidly becomes major. 600W is scary, with reports of 800W spikes.

12V2x6 is particularly problematic because any imbalance, such as a bad connection on a single pin, will quickly push things over spec. For example, at 600W, 8.3A is carried on each pin in the connector. Molex Micro-Fit 3.0 connectors are typically rated to 8.5A -- that's almost no margin. If a single connection is bad, current per pin goes to 10A and we are over spec. And that's if things are mated correctly. 8.5A-10A over a partially mated pin will rapidly heat up to the point of melting solder. Hell, the 16 gauge wire typically used is pushing it for 12V/8.5A/100W -- that's rated to 10A. Really would like to see more safety margin with 14 gauge wire.

In short, 12V2x6 has very little safety margin. Treat it with respect if you care for your hardware.
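
To make the arithmetic above concrete, here's a quick back-of-the-envelope check (a sketch only; the ~8.5A pin rating and the 6-pair layout are the figures from this comment, not from a datasheet):

  # Rough per-pin margin check for a 12V-2x6 / 12VHPWR connector.
  # Assumptions (from the comment above, not a datasheet): 600 W load,
  # 12 V rail, 6 supply-pin pairs, pins rated ~8.5 A.
  POWER_W = 600
  VOLTS = 12.0
  PINS = 6
  PIN_RATING_A = 8.5

  total_current = POWER_W / VOLTS               # 50 A total
  per_pin = total_current / PINS                # ~8.33 A per pin, right at the rating
  per_pin_one_bad = total_current / (PINS - 1)  # ~10 A if one pin stops conducting

  print(f"total: {total_current:.1f} A")
  print(f"per pin (all 6 good): {per_pin:.2f} A of {PIN_RATING_A} A rating")
  print(f"per pin (one pin open): {per_pin_one_bad:.2f} A -> over the rating")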


This can't be Micro-Fit 3.0, those are only sized to accept up to 18AWG. At least, with hand crimp tooling, and that's dicey enough that I'd be amazed if Molex allowed anything larger any other way. The hand crimper for 18AWG is separate from the other tools in the series, very expensive, and a little bit quirky. Even 18AWG is pushing it with these terminals.

This has to be some other series.


Great summary. Buildzoid over on YouTube came to a similar conclusion back during the 4xxx series issues[1], and it looks like he's released a similar video today[2]. It's worth a watch as he gets well into the electrical side of things.

It's been interesting to think that we've probably been dealing with poor connections on the older Molex connectors for years, but because of the ample margins, it was never an issue. Now with the high power spec, the underlying issues with the connectors in general are a problem. While use of sense pins sorta helps, I think the overall mechanism used to make an electrical connection - which hasn't changed much in 30+ years - is probably due for a complete rethink. That will make connectors more expensive no doubt, but much of the ATX spec and surrounding ecosystem was never designed for "expansion" cards pushing 600-800W.

[1] - 12VHPWR failures (2023): https://youtu.be/yvSetyi9vj8?t=1479
[2] - Current issues: https://www.youtube.com/watch?v=kb5YzMoVQyw


> I think the overall mechanism used to make an electrical connection - which hasn't changed much in 30+ years - is probably due for a complete rethink.

There are tons of high-power connectors out there, and they look and work pretty much the same as the current ones (to the untrained eye). They are just more expensive.

Though at 40A+ you tend to see more "banana" type connectors, with a cylindrical piece that has slits cut in it to deform. Those can handle tons of current.


This is the most informative assessment in this thread.

You'd expect the capacity to be 125% of the load, as is common in other electrical systems.

Ratings for connectors and conductors come with a temperature spec as well, indicating the intended operating temperature at a load. I'm sure, with this spec being near the limit of the components already, that the operating temperatures near full load are not far from the limit, either.

Couple that with materials that may not have even met that spec from the manufacturer and this is what you get. Cheaper ABS plastic on the molex instead of Nylon, PVC insulation on the wire instead of silicone, and you just know the amount of metal in the pins is the bare minimum, too.


"3rD party connectors" is being waved around by armchair critics. The connectors on the receiving end of all of this aren't some cheap knock-off, they are from a reputable manufacturer and probably exceed the baseline.

The cable must necessarily be third-party from the perspective of the GPU or from the perspective of the power supply.

If it's built to the expected tolerance, it should work.


Show me the Molex logo molded on that connector end and I'll believe you.

It's all off-brand slop from companies that learned marketing by emulating western brands. Kids on the internet lap up circuitous threads about one brand being better than the other based on volume.

"MODDIY" is a reputable brand? C'mon.


No sane individual is going to buy a 5090 for $2000-100000 and hook it up to a $15 power supply.

Correct, but it turns out, a fair fraction of the people who day 1 purchase a GPU for that much may not necessarily be described as wholly sane.

your comment makes me wonder...

Is it a computer science type person, who might be unaware of electrical engineering...

Or someone who takes a perfectly good car and adds ridiculous rims that rub?


They aren't insane, just not thrifty with their money.

They're not necessarily insane, I agree.

But I've also known some people who did that when they did not have the disposable income to do it.


> Any imbalance

I watched der8auer's analysis this morning, and you've seemingly hit the nail on the head. Even on his test bench it looks like only two of the wires are carrying all of the power (instead of all of them, I think 4 would be nominal?) - using a thermal camera as a measuring tool. The melted specimen also has a melted wire.

Maybe 24V or 48V should be considered, and thicker (lower-gauge) wires - yes.


It would be _lovely_ if instead of the 12V only spec we went to 48V for internal distribution. Though that would require an ecosystem shift. USB-PD 2.0~3.0 would also be better supported https://en.wikipedia.org/wiki/USB_hardware#USB_Power_Deliver...

As others no doubt mention, power loss (watts) = I (amps) * V (volts, the delta/change along the wire).

dV = I*R ==> P = dV * I = I * I * R -- That is, other things being equal, amps squared is the dominant factor in how much power loss occurs over a cable. In the low voltage realms most insulators are effectively the same and there's very little change in resistance relative to the voltages involved, so it's close enough to ignore.

600W @ 12V? 50A ==> 1200 * R while at 48V ~12.5A ==> 156.25 * R

A 48V system would have only ~13% the resistive losses over the cables (more importantly, at the connections!); though offhand I've heard DC to DC converters are more efficient in the range of a 1/10th step-down. I'm unsure if ~1/25th would incur more losses there, nor how well common PC PCB processes handle 48V layers.

https://en.wikipedia.org/wiki/Low_voltage#United_States

""" In electrical power distribution, the US National Electrical Code (NEC), NFPA 70, article 725 (2005), defines low distribution system voltage (LDSV) as up to 49 V.

The NFPA standard 79 article 6.4.1.1[4] defines distribution protected extra-low voltage (PELV) as nominal voltage of 30 Vrms or 60 V DC ripple-free for dry locations, and 6 Vrms or 15 V DC in all other cases.

Standard NFPA 70E, Article 130, 2021 Edition,[5] omits energized electrical conductors and circuit parts operating at less than 50 V from its safety requirements of work involving electrical hazards when an electrically safe work condition cannot be established.

UL standard 508A, article 43 (table 43.1) defines 0 to 20 V peak / 5 A or 20.1 to 42.4 V peak / 100 VA as low-voltage limited energy (LVLE) circuits. """

The UK is similar, and the English Wikipedia article doesn't cite any other country's codes, though the International standard generally talks at the power grid distribution level.


> A 48V system would have only ~13% the resistive losses over the cables (more importantly, at the connections!)

It's one-sixteenth (6.25%) actually. You correctly note that resistive losses scale with the square of the current (and current goes with reciprocal voltage), so at 4 times the voltage, you have 1/4th the current and (1/4)^2 = 1/16th the resistive losses.
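
A quick sanity check of that ratio (my own arithmetic, using only the 600W / 12V / 48V figures from the thread):

  # Resistive loss in the cable scales as I^2 * R for a fixed cable resistance R.
  power_w = 600.0
  for volts in (12.0, 48.0):
      amps = power_w / volts
      print(f"{volts:>4.0f} V: {amps:5.1f} A, loss = {amps**2:.2f} * R")
  # 12 V: 50.0 A, loss = 2500.00 * R
  # 48 V: 12.5 A, loss = 156.25 * R  -> 156.25 / 2500 = 1/16 of the losses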


I've been beating the 48v drum for years. Any inefficiency in the 48-to-1 conversion should be mostly offset by higher efficiency in the 240-or-120-to-48 conversion, I suspect it's a wash.

Every PoE device handles 48 without issue on normal PCB processes, so I don't expect that to be a big deal either. They _also_ have a big gap for galvanic isolation but that wouldn't be necessary here.


48v is basically already the standard for hyperscalers with power shelf rack designs.

> I think 4 would be nominal?

6 or 12, depending on how you count. There are 6 12V supply wires, and 6 GND return wires. All of them should be carrying roughly the same current - just with the GND wires in the opposite direction from the 12V ones.


Yes, and by the way I also think typical GPU cables are way too stiff for such a small and fragile connector.

I would love to get some insight from Nvidia engineers on what happened here.

The 3 series and before were overbuilt in their power delivery system. How did we get from that to the single-shunt-resistor, incredibly dangerous fire-hazard design of the 5 series? [1] Even after the obvious problems with the 4 series, Nvidia actually doubled down and made it _worse_!

The level of incompetence here is actually astounding to me. Nvidia has some of the top, most well paid EE people in the industry. How the heck did this happen?

1: https://www.youtube.com/watch?v=kb5YzMoVQyw


Maybe having cards drawing >500/600 watts is just a bad idea.

Add in a CPU and such and we're quickly approaching the maximum power that can be drawn continuously from a standard 15 amp circuit (12 amps continuous).


Not sure if this is related, but a friend there said a lot of people retired.

Derbauer and Buildzoid on YouTube made nice, informative videos on the subject, and no, it is not "simply a user error". So glad I went with 7900 XTX - should be all set for a couple of years.

Summary of the Buildzoid video courtesy of redditors in r/hardware:

> TL;DW: The 3090 had 3 shunt resistors set up in a way which distributed the power load evenly among the 6 power-bearing conductors. That's why there were no reports of melted 3090s. The 4090/5090 modified the engineering for whatever reason, perhaps to save on manufacturing costs, and the shunt resistors no longer distribute the power load. Therefore, it's possible for 1 conductor to bear way more power than the rest, and that's how it melts.

> The only reason why the problem was considered "fixed" (not really, it wasn't) on the 4090 is that apparently in order to skew the load so much to generate enough heat to melt the connector you'd need for the plug to not be properly seated. However with 600W, as seen on der8auer video, all it takes is one single cable or two making a bit better contact than the rest to take up all the load and, as measured by him, reach 23A.

https://old.reddit.com/r/hardware/comments/1imyzgq/how_nvidi...
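
To illustrate the mechanism, here's a toy model of parallel wires sharing current in proportion to their conductance (the resistances are made-up illustrative values, not measurements; a few extra milliohms of contact resistance on most pins is enough):

  # Six 12 V supply wires in parallel carrying 600 W. Each path's share of the
  # 50 A total is set by its conductance (1/R), so slightly better contacts
  # hog the current. Resistances are hypothetical (wire + both contacts).
  total_amps = 600 / 12
  resistances = [0.005, 0.006, 0.040, 0.050, 0.060, 0.070]  # ohms, illustrative
  conductances = [1 / r for r in resistances]
  g_total = sum(conductances)
  for i, g in enumerate(conductances):
      print(f"wire {i}: {total_amps * g / g_total:5.1f} A")
  # The two best wires end up with ~22.6 A and ~18.8 A while the other four
  # share less than 9 A -- even though every pin is "connected".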


- Most 12VHPWR connectors are rated to 9.5-10A per pin. 600W / 12V / 6 pin pairs = 8.33A. Spec requires 10% safety factor - 9.17A.

- 12VHPWR connectors are compatible with 18ga or at best 16ga cables. For 90C rated single core copper wires I've seen max allowed amperages of at most 14A for 18ga and 18A for 16ga. Less in most sources. Near the connectors those wires are so close they can't be considered single core for the purpose of heat dissipation.


Honestly with 50A of current we should be using connectors that screw into place firmly and have a single wipe or a single solid conductor pin style. Multi-pin connectors will always inherently have issues with imbalance of power delivery. With extremely slim engineering margins this is basically asking for disaster. I stand by what I've said elsewhere: If I was an insurance company I'd issue a notice that fires caused by this connector will not be covered by any issued policy as it does not satisfy reasonable engineering margins.

edit: replaced power with current... we're talking amps not watts


Derbauer: https://www.youtube.com/watch?v=Ndmoi1s0ZaY

Buildzoid: https://www.youtube.com/watch?v=kb5YzMoVQyw

I went 7900 GRE, not even considering Nvidia, because I simply do not trust that connector.


> So glad I went with 7900 XTX - should be all set for a couple of years.

Really depends on the use case. For gaming, normal office, smaller AI/ML or video-work, yeah, it's fine. But if you want the RTX 5090 for the VRAM, then the 24GB of the 7900 XTX won't be enough.


Honestly, the smart play in that case is to buy two 3090s and connect them with NVLink. Or... and hear me out: at this point you could probably just invest your workstation build budget and use the dividends to pay for runpod instances when you actually want to spin up and do things.

I'm sure there are some use cases for 32gb of vram but most of the cutting edge models that people are using day to day on local hardware fit in 12 or even 8gb of vram. It's been a while since I've seen anything bigger than 24gb but less than 70gb.


> most of the cutting edge models that people are using day to day on local hardware fit in 12 or even 8gb of vram.

I'm not sure what your idea of "day to day" use cases are, but models that fit in 12GB of VRAM tend to be good for like autocomplete and not much more. I can't even get those models to choose the right tool at the right time, let alone be moderately useful. Qwen2.5-32B seems to be the lower boundary of a useful local model; it'll at least use tools correctly. But then for "novel" (for me) coding, basically anything below O1 is more counter-productive than productive.


Yes I was gonna mention that Qwen model from the deepseek folks as maybe an exception

Enjoying my 7900XTX as well. I really don't understand why nvidia had to pivot to this obscure power connector. It's not like this is a mobile device where that interface is very important - you plug the card in once and forget about it.

Yeah, I expect my next card will be AMD. I'm happy with my 3080 for now, but the cards have nearly doubled in price in two generations and I'm not going to support that. I can't abide the prices nor the insane power draw. I'm OK with not having DLSS.

I agree about the prices, but you say you have a 3080, which has a higher max power draw than the 5080 or 4080. Power requirements actually went down.

It's true, but to be fair, going by the hardware the 5080 is really more of a 70 series card in previous gens. I was just thinking of the insane top end of the 90 series.

It'll probably be fine for years, longer if you can stand looking at AI-generated, upscaled frames. The uplift in GPU power is so expensive, we might as well be back to the reign of the 1080. The only thing that'll move the needle will be a new console generation.

And a 1080 is totally fine at 1920x1080 60fps even with recent games (I see my son enjoying Elden Ring, Helldivers 2 etc with that setup).

My 1080 has been running with the same configuration for years. The only thing I consider a downside is the lack of power for exploring AI locally, and AI isn't worth buying a $1234 video card for myself.

I had two AMD 7900 XTX and they both overheated like mad. Instant 110°C and throttle. Opted for a refund instead of a third GPU.

Probably reference cards, yeah? I think the common advice is to not buy the reference cards. They rarely cool well enough. I made that mistake with the RX 5700 XT, and never will again.

My 5700XT overheated constantly. It wasn't a reference card. The problem was that the default fan curve in the driver maxed out at 25%!

AMD is REALLY bad at software.


Yeah, they were reference, direct from AMD's own store.

Those had an issue with their vapor chamber design which affected cooling performance depending on their orientation. I made sure to buy an AIB model that didn't suffer the same issue, just in case I want to put the card somewhere funky like a server rack.

https://www.tomshardware.com/news/defective-vapor-chamber-ma...


It's too bad AMD will stop even aiming for that market. But also, I bought a Sapphire 7900 XTX knowing it'd be in my machine for at least half a decade.

There is a real problem with the connector design somewhere: der8auer tested with his own RTX 5090FE and saw two of the cable strands reach concerning temperatures (>150°C).

Video timestamp: https://youtu.be/Ndmoi1s0ZaY?t=883


I've tested my own 5090FE the same way he did (Furmark for at least 5 minutes at max load, but I actually ran it for 30 minutes just to be mega sure) and with an infrared thermometer the connector is at 45°C on the GPU side and 32°C on the PSU side. I have no idea what's happening in his video but something is massively wrong, and I don't understand why he didn't just test it with another PSU/cable.

More peeking and prodding would definitely have been welcome, but it's still a useful demonstration that the issues around the card's power balancing are not theoretical, and can definitely be the culprit behind the reports.

Are you using the splitter that nvidia provided or a 600w cable? Also, what PSU?

I've been using mine remotely, so trying to figure out how much I should panic. I'm running off the SF1000 and the cable it came with. Will be a few weeks before I can measure temperatures.


The new Corsair RM1000x (ATX 3.1 model), with the included 12V-2x6 cable (so just one connector at the PSU and one at the GPU, no adapter).

Good to know. I guess I'll just hold out hope that things are ok and avoid heavy work loads until I can measure things properly.

Is NVIDIA breaching any consumer safety laws by pumping twice the rated current through a 24ish gauge wire? Perhaps by violating their UL certification?

Aren't 12VHPWR cables like https://www.overclockers.co.uk/seasonic-pcie-5.0-12vhpwr-psu... 16AWG ?

Sure, there are problems with the connector. But 600W split over a 12-pin connector is 8.3A per wire, and a 16AWG / 1.5mm² wire should handle that no problem.


You're correct about 16AWG however "But 600W split over a 12-pin connector is 8.3A per wire" is only what _should_ ideally be occurring, not what Roman aka Der8auer _observed_ to occur. Even with his own 5090, cable, PSU, and test setup:

> Roman witnessed a hotspot of almost 130 degrees Celsius, spiking to over 150 degrees Celsius after just four minutes. With the help of a current clamp, one 12V wire was carrying over 22 Amperes of current, equivalent to 264W of power.


Made some 12v-2x6 custom cables for fun and 99% sure the melting problems are from the microfit female connectors themselves. A lot of resistance going through the neck

CE marking in Europe could be an issue. There's potential for a fine or forced recall.

UL is a private company. There're no laws requiring it or penalizing violations. I would think the only legal consequences would be through civil product liability/breach of warranty claims. Plus, like, losing the certification would mean most stores would no longer stock it.

Many products sold in the United States must be tested in a CPSC-certified lab for conformity, of which UL is the best known. But consumer electronics don’t seem to be among that set, unless they are roped in somehow (maybe for hazardous substances?).

Seems like if you filled your house with non-UL compliant stuff and your house burned down, the first fact would be material to your insurance carrier (you know, the Underwriter to which the UL name refers…)

You might want to do some research on what you can buy and legally plug into your own home. It's more or less you get UL listing, or the product isn't available


For how much longer will that .gov website be operating?

This same thing happening on the 40 series cards was good enough vindication for me not 'upgrading' to that at the time. I'd rather not burn my house down with my beloved inside.

Can't believe the same is happening again.


I can relate to this perspective! It's important to step out of the system and recognize priorities like this; balancing risk and reward.

I think, in this particular case, perhaps the risk is not as high as you state? You could imagine a scenario where these connectors could lead to a fire, but I think this is probably a low risk compared to operation of other high-power household appliances.

Bad connector design that's easy to over-current? Yes. Notable house-fire risk? Probably not.


Step the GPU voltage up to 48V. (You're making a new connector that's not compatible with existing PSUs anyway. Why not actually fix a problem at the same time, once and for all! [48V should be enough for anybody, right?])

Not a bad idea IMHO. There are already computers (servers mostly, but also integrated models) that only have 12V power connections and the mainboard does the step-down voltage conversion, and IIRC some companies wanted to do the same for regular desktops.

I would be totally happy if the next gen of computers had 12V outputs to the mainboard and CPU and 48V to the GPU and other power-hungry components. This would make the PCBs of those cards a bit bigger, but on the other hand there would be lower power losses and less risk of overheated connectors.


> Step the GPU voltage up to 48V.

Meh. Might as well ask for its own AC cable and be done with it.


But I want fewer cables, not more.

Until the US changes their AC power connectors, we just don’t have a use case for it frankly. When the entire system is going to always top out at 1200W or so (so you have an extra few hundred watts for monitors and such), we’re pretty limited to maximum amperage.

The USA has 240 volt plugs. They are only used for high power appliances such as AC or ovens. If you want, you could add a plug for your high powered space heater AKA gaming PC.

Yes, but most people don't want to pay the thousands of dollars to get an electrician to do a rewire.

The problem there would be your breaker. I am not an electrician but I can tell you that when I tried adding a heated MAU to my house, I had to switch to a 120V washer/dryer because my electric panel did not have space for another 208V line.

(Note, my building is actually 3 phase 208 volt not 240volt so I don't have 240 volt plugs but 208volt plugs)


I’m aware we have 240V outlets. They are just not used in a place where you would put a PC. Until there is a shift in need (I.e., every normal user would need more than a 120V plug could handle), you won’t ever see 240V outlets in offices. I suspect it will never happen.

In server areas and extremely specialized stuff? Yea, sure. But we’re talking desktop PCs here.


There are also 20 amp circuits, which are common.

Many houses run circuits that are rated for 20 amps even if they don't have the right outlet for it so this is an inexpensive upgrade for most.


I did not realize the outlet impacts the amperage… Is it a rating issue, or is there an actual part there that will trip?

It’s the whole chain - 20a outlets typically require 12ga wire instead of 14ga, a 20 amp breaker, and yes - the outlet is different. The 20a outlets add a horizontal opening to one of the 2 vertical slots, making a sideways T shape. Devices that require 20 amps will have one of those horizontal prongs to ensure you don’t plug them in to a 15 amp outlet.

The outlet itself doesn't care, but the shape of the receptacle is supposed to restrict insertion of a 20 amp device into a 15 amp socket. You can stick a 15 amp device into a 20 amp socket, but not vice versa. The electrician should be installing 20 amp sockets if the cabling can support it, but many don't.

It's the difference between NEMA 5-15 and 5-20: https://en.wikipedia.org/wiki/NEMA_connector#NEMA_5


> The electrician should be installing 20 amp sockets if the cabling can support it, but many don't.

I think this is mainly because the 20 amp outlets are kind of ugly, and the fact that barely anything actually uses a 20 amp plug.

In my house, almost every circuit has a 20 amp breaker and 12 ga (yellow) romex, but only a couple of outlets are 5-20.


NEMA 5-20 is only required for commercial. You can use NEMA 5-15 on a 20A circuit for residential in the US.

When I said "should", I didn't mean code required it, but that slapping a 15 amp cover over a 20 amp capable circuit is kindof stupid.

The shape of the outlet is different for different current allowances (the spades are wider or rotated). It is supposed to allow an electrician to indicate that the whole circuit is rated to handle the higher expected load, and that there aren’t other outlets on the same circuit which might also try to use the whole current available. Basically a UI problem trying to encourage robust designs for use by non-experts

I envy my European friends' 240v electric kettles

British kettles draw so much power, the electric utility had to consider the additional power draw on the grid from synchronized tea-making during the commercial breaks of a popular soap opera, back when broadcast TV was king.

And to drive that point home, we get induction stoves that run on three-phase 400V.

D':

> I envy my European friends' 240v electric kettles

... do you not have electric kettles in the US? Foolishly, I thought this was a standard kitchen appliance all over the world, I've even seen it in smaller cities in Peru.


We do, but they’re limited to much lower wattage due to the outlet limits. A typical US kettle is 1100-1400W, and takes maybe 1-2 minutes to boil. Kettles in the UK are typically 2.5-3kW.


And they're probably not real. Take a look at any of the clones of the Dyson hair dryer and check their claimed RPMs; many of them would have the tip of the fan blade spinning at several times the speed of sound if they actually hit their limit.

There are aquarium heaters on Amazon that say they're 10kW or more and plug into a 120V outlet.

I bought a magnet that is supposed to hold "150 pounds", but pulls off the ceiling (in its strongest position) with just 10-15 pounds.

Amazon specs are fake.


We also have 20 amp wiring, 20 amp breakers, 20 amp sockets, and plugs too. A lot easier than going 240 volt. That will give you 2400 watts max.

Most residential wiring is 15A except for bathrooms.

And kitchens.

Otherwise, running your microwave, toaster, and coffee maker at the same time would likely trip the breaker.

And obviously, the stove/oven is on its own circuit unless it's gas.


My kitchen has at least 2 120VAC circuits, which seems to avoid this.

Don’t forget garages, too.

Now I must use lots of rather thick cables in my desktop (because I run GPUs).

Imagine that the GPU would instead suck up all the power it needs through the PCIe connector, without all those pesky cables. (Right now PCIe can provide 75W at 12V, i.e. 6.25A; that same current would provide 300W at 48V.)


The PCIe slot would not be sufficient even if the power architecture moved to 48V: the 12VHPWR are getting 600W pushed through them.

I pulled a fresh 20A (120V) circuit just for my 5090 build.

What power supply do you have that even has a 20A inlet? 20 amp breakers are common for outlets (especially in newer builds) but the outlets are still 15A outlets. And there is essentially no desktop power supply that exists that would exceed a 15A outlet currently.

> What power supply do you have that even has a 20A inlet?

ATX PSUs usually have IEC 60320 C14 inlets. The IEC 60320 standard itself states that this inlet is only good for up to 10 Amps.

UL is happy to ignore them and say that 15 Amps is okay. It wouldn't surprise me if someone else were happy to ignore that and say that 20 Amps is okay.

Even still, swapping a C14 inlet for a C20 inlet (IEC max 16 Amps, UL max 20 Amps) would be a relatively easy thing to do (EDIT: on a PSU that is already designed to take more than 15 Amps, obviously). Probably a warranty-voiding action though.


https://www.amazon.com/dp/B09PJYMK77/zcF9kZXRhaWxfdGhlbWF0aW...

I'm sure there are power supplies for servers that go above 1600 watts too. If you really want to, you can ... but you really shouldn't.


There are plastics that can deal with high temperatures.[1] They're heavily used in automotive applications. They're not often seen inside computers.

Still, 50 amps inside a consumer computer is excessive. At some point it's time to go to a higher voltage and get the current down.

[1] https://www.plastopialtd.com/high-temperature-plastics/


At this point, I'm waiting for the first RTX generation that just comes with its own separate PSU and wall plug cable.

People are hooking air con units to liquid GPU coolers

https://www.tomshardware.com/pc-components/liquid-cooling/rt...


For what it's worth, the "air conditioner" is just a giant radiator. From the linked bilibili video it's clear (at 1:20 mark) that they've repurposed the radiator, fan, and case from an air conditioner but there is no compressor.

I've read about this exact practice in one form or another since the Pentium 3 (sometimes just directly cooling with R134).

Someone remind me again why GPUs need 600 watts? I never liked the concept of having to plug a power cable into a GPU, but these new connectors are just terrible...

> Someone remind me again why GPUs need 600 watts?

Imagine a GPU that uses fewer watts. Now imagine someone in charge of the high end models makes it bigger until they hit a limit.

That limit is generally either chip size or power, and chips have gotten so dense that it's usually not chip size.

That's why the very top GPU uses as much power as you can get it.


Because it's a competitive industry, and further efficiency gains are either not available ("can't do" option) or were deemed strategically unsound to roll out for now ("won't do" option), possibly both. It's an active dimension of sprawl.

The GPUs require additional power, as the PCI-e slot they're connected to can only carry so much.

Obviously there are GPUs without aux power connectors, but they're considered low-tier.


Power overwhelming! ;)

You only paid $2000 for it, what did you expect.

Ha! Find me where I can buy it for $2000.

Yeah, how do people have these in their hands already? Everywhere I look is sold out.

It's sold out to the people who have these in their hands already.

it's actually $5090 where I live.

der8auer got his hands on the actual card, cable and PSU: https://www.youtube.com/watch?v=Ndmoi1s0ZaY (I'm assuming the content is identical to the German https://www.youtube.com/watch?v=puQ3ayJKWds - I haven't watched the English one)

Notable is that on the PSU connector side, 5 pins show heat damage. That means at minimum those 5 must have been carrying some current; i.e. only one of the 6 connections could have failed completely open.

On the PSU side, one of the ground pins melted into the PSU connector; this should allow a lab to verify whether the plug was fully inserted by disassembling and cross-sectioning it.


Bring back screw in connectors.

The extended title and subtitle say a lot.

> -- Uneven current distribution likely the culprit

> One wire was spotted carrying 22A, more than double the max spec.


This shit is so fucking dumb. Sorry for the unhinged rant, but it's ridiculous how bad every single connector involved with building a PC is in 2025.

I'm just a software guy, so maybe some hardware engineer can chime in (and I'd love to find out exactly what I'm missing and why it might be harder than it seems), but why on earth can everything not just be easily accessible and click nicely into place?

I'm paying multiple hundred dollars for most of these parts, and multiple thousands for some now that GPUs just get more and more expensive by the year, and the connector quality just gets worse and worse. How much more per unit can proper connectors possibly cost?

I still have to sit there stressing out because I have no idea if the PSU<->Mobo power connector is seated properly, I have no idea if the GPU 12VHPWR cable is seated properly, I'm tearing skin off my fingers trying to get the PSU side of the power cables in because they're all seated so closely together, have a microscopic amount of plastic to grip onto making it impossible to get any leverage, and need so much force to seat properly, again with no fucking click. I have no idea if any of the front panel pins are seated properly, I can't even reach half of them even in a full ATX case, fuck me if I want anything smaller, and no matter what order you assemble everything in, something is going to block off access to something else.

I'm sure if you work in a PC shop and deal with this 15 times a day you'll have strategies for dealing with it all, but most of us build a PC once every 3 years if that. It feels like as an average user you have zero chance to build any intuition about how any of it works, and it's infuriating that the hardware engineers seem to put in fuck all effort to help their customers assemble their expensive parts without breaking them, or in this case, having them catch fire because something is off by a millimetre.

This space feels ripe for a radical re-design.


I know we're just ranting, and there are reasons for the seemingly bad designs. But I have a very recent 1200W Corsair (ATX 3.1/PCIe 5.1) which uses these special "type 5" mini connectors on the PSU side. It's painful to try and get your fingers between them to unclip a cable, and yesterday two of the clips broke off just trying to remove them. I ended up taking the whole PSU out just to make sure I didn't lose plastic clips into the PSU itself. It's fine now, but two of my cables will never latch again. Just, blah.

My first build used a Kingwin PSU from around 2007 which used "aircraft style" round connectors which easily plugged in then screwed down. It even had a ring of blue LEDs around the connectors. It was so cool and felt premium! Having that experience to compare to made the Corsair feel cheap despite being so much more powerful.


Connectors are actually extremely difficult to make.

- you have to ensure that the metal connectors take shape and bond to the wire properly. This is done by crimping. Look up how much a good crimping tool costs for a rough approximation of how difficult it can be to get this right.

- one plastic bit has to mate with another plastic bit, mechanically. This needs to be easy enough for 99.99% of users to do easily, yet it needs to be 99.99% reliable, so that the two bits will not become separated, even partially. Even under thermal expansion.

- the electrical contacts inside must be mechanically mated over a large surface area so that current can pass from one connector to another.

- it must be intuitive for people to use. Ideally user pushes it and it clicks right in. No weird angles either, it could be behind a mechanical component that's tough to reach. Also, user has to be able to un-mate the connector from the same position. It should be tough for a user to accidentally plug in an ill suited connector into the wrong slot.

- has to cost peanuts. Nobody will pay $3 for a connector. Nobody will even want to pay $1 for a connector. BOM cost is 15-20% finished goods cost. Will the end user pay $8, $10, $12 for a good connector? No.

- repeatable to manufacture (on the board and on the cable) at high quality. User might take apart their PC a dozen times, to fix things, clean, etc for the lifetime of the component. So the quality bar is actually very high. Nothing can come loose or break off, not even internal parts.

- physically compact. PCB space is at an extreme premium.

- your connector design has to live across many product cycles, since people are going to be connecting old parts to new boards and they'll be upset if they can't do this. So this increases risk by a lot as redesigning a connector means breaking compatibility for existing users.

Connectors are actually a very very deep and interesting well.

I'm not surprised at all that they are running into issues here, these cards are pulling 500+ watts. That is a LOT of current.

I think next gen we will begin seeing 24V power supplies to deal with this.


> I think next gen we will begin seeing 24V power supplies to deal with this.

May as well go the whole hog & jump to 48V.

(50V is as high as you can go whilst still being inside the “low voltage” electrical safety regime in most countries IIRC.)


The general SELV limit is 60V; that's why PoE is 54-56V at the source (it's calculated at roughly 10% tolerance so it can be built cheaply).

Then the graphics card would have to have a transformer on it to step down to the voltage that the chips can handle.

They already do - most of the components buck the 12V down to the 1.3ish volts that the GPU core needs

They are not transformers, though. The coils/chokes are not galvanically isolated, which makes them (more) efficient. Stepping down from 48V to 0.8V (with massive transient spikes) is generally way harder than doing it from 12V. So they may end up with multi-step converters, but that would mean more PCB area and more passives.

3.3V from 48V is a standard application for PoE. (12V intermediate is more common though.) The duty cycle does get a bit extreme. But yes, most step-down controllers can't cover both a 0.8V output voltage and 48-60V input voltage. (TI WEBENCH gives me one - and only one - suggested circuit, using an LM5185. With an atrocious efficiency estimate.)
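
To put a number on "a bit extreme" (back-of-envelope for an ideal buck converter; the 1 MHz switching frequency is my assumption, not from any datasheet):

  # For an ideal buck converter, duty cycle D ~= Vout / Vin.
  vout = 0.8
  f_sw = 1e6  # Hz, assumed switching frequency (illustrative)
  for vin in (12.0, 48.0):
      d = vout / vin
      on_time_ns = d / f_sw * 1e9
      print(f"Vin {vin:>4.0f} V: D = {d:.1%}, on-time ~ {on_time_ns:.0f} ns per cycle")
  # 12 V in: ~6.7% duty, ~67 ns on-time. 48 V in: ~1.7% duty, ~17 ns on-time,
  # which is why single-stage 48 V -> sub-1 V conversion gets awkward.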

You'd probably use an intermediate 12V rail especially since that means you just reuse the existing 0.8V regulator designs.


So then it would need to be significantly larger.

Likely smaller actually.

This isn’t how it works.

Your SMPS needs sub-2V output, cool. That means it only needs to accept small portions of the incoming.

But, if the incoming is 48V, it needs 48V tolerant parts. All your caps, inductor (optional typically), diodes, the SMPS itself.

Maybe there isn’t a sides difference in a 0603 50V capacitor and 10V 0603 capacitor, but there is a cost difference. And it definitely doesn’t get smaller just because.

Your traces at 48V likely need more space/separation or routing considerations than they would at 24V, but this should be a quickly resolved problem, as your SMPS is likely right next to your connector.


Yes. And it also doesn’t need to handle 40+ AMPs on input, with associated large bus bars, large input wires, etc.

Extra insulation is likely only a mm or two, those other components are big and heavy, and have to be.

It’s the same reason inverters have been moving away from 12v to 48v. Larger currents require physically larger and heavier parts in a difficult to manage way. Larger voltages don’t start being problematic until either > 48v or >1000v (depending on the power band).


>> Connectors are actually extremely difficult to make.

While your points listed are valid, we have been making connectors that overcome these points for decades, in some cases approaching the century mark.

>> I'm not surprised at all that they are running into issues here, these cards are pulling 500+ watts. That is a LOT of current.

Nonsense. I used to work at an industrial power generation company. 500W is _nothing_. At 12VDC, that is 41.66A of current. A few, small, well made pins and wires can handle that. It should not be a big deal to overcome that. We have overcome that in cars (which undergo _extreme_ temperature and environmental changes, in mere minutes and hours, daily, for years), space stations (geez), appliances, and thousands of other industrial applications that you do not see (robots, cranes, elevators, equipment in fields and farmlands, equipment in mines, equipment misused by people)... and those systems fail less frequently than Nvidia connectors. But your comment would lead one to think that building a connector with twelve pins on it to handle a whopping (I am joking) 500W (not much, really, I have had connectors in equipment that needed to handle 1,000,000Watts of power, OUTDOORS, IN THE RAIN, and be taken apart and put back together DAILY) is an insurmountable task.


One word: cost.

Look up how much industrial/automotive connectors cost, and you'll see the huge difference in quality.


Yes, cheap connectors exist and there is a market for them, like everything "cheap". But why defend a trillion dollar company, on a product that was never marketed as "cheap" and actually comes with a hefty price tag, for skimping on something that is 0.01% of their BoM cost? If you sell for a premium price, you'd better make sure your product is premium.

Those GPUs aren’t particularly cheap, even a $100 connector and cable wouldn’t be a huge deal breaker for a $2000-3000 device if it means it’s reliable and won’t start a fire (that’ll cost way more than $3100)

I've bought cars that cost me less than a nVidia card (and they were running).

Which new cars cost less than $1000-$2000?

They didn't say new cars.

Used objects and imports from economically isolated countries are traded at meme value; that doesn't count.

Then what's the point of such an arbitrary comparison? It's normal that plenty of commodities that were expensive when new have been devalued by age and can cost less on the used market than the top of the line BRAND NEW cutting edge GPU today, which itself will be worthless in 10-20 years on the used market and so on.

Presumably, the point is that a working car is more complicated & cheaper (in this case) than the graphics card, while the graphics card maker can't figure out how to make a connector.

I read it as a kind of funny comment making a broader point (and a bit of a jab at nVidia), not a rigorous comparison. I think you might be taking it a bit more seriously than was intended.


An old legacy car is definitely not more complicated than designing and manufacturing a cutting edge silicon made for high performance compute.

The price difference is just the free market supply and demand at work.

People and businesses pay more for the latest Nvidia GPUs than for an old car because for their use case it's worth it, they can't get a better GPU from anywhere else because they're complex to design and manufacture en-masse and nobody else than Nvidia + TSMC can do it right now.

People pay less for an old beater car than for Nvidia GPUs because it's not worth it: there are a lot better options out there in terms of cars, and cars are interchangeable commodities that are easy and cheap to design and manufacture at scale at this point, but there are no better options to easily replace what Nvidia is selling.

Comparing a top GPU with old cars is like comparing apples to monkeys; it makes no sense and doesn't prove any point.


>An old legacy car is definitely not more complicated than designing and manufacturing a cutting edge silicon made for high performance compute.

A car is more complicated than a connector, at least.

Anyways, the rest of your comment is again taking a humorous one-liner way too seriously. Thanks for the econ lesson though, I guess. I liked the part where you explained to me the basics of supply and demand like I am in 5th grade.


>A car is more complicated than a connector, at least.

The connectors on a new car cost more than the connectors on a new GPU part for part.

>I liked the part where you explained to me the basics of supply and demand like I am in 5th grade.

You'd be surprised about the state of HN understanding of how basic things in the world work.


That would be relevant if the margins on GPUs weren’t astronomical.

Well surely they can take that cost out of the $5090 people are paying for these cards.

No, not for a connector for 500W, on a $2000 GPU from one of the worlds biggest companies. They can do better.

Nvidia is clearing 4 figures on each 5090. They can afford another few dollars on connectors.

"Nobody will pay $3 for a connector"

I would pay $10.


This whole conversation seems absurd! Of course you'd pay for the right power connector for your multi-thousand dollar card!

You don't buy a $200k sports car and then take it to Jiffy Lube for oil changes. You pony up for the cost of proper maintenance!


I work in power electronics and there are ample connectors that can handle any type of power requirement.

What is happening in the computer space is that everyone is clinging to an old style of doing things, likely because it is free and open, and suggestions to move to new connectors get saddled with proprietary licensing fees.

D-sub power connectors have been around forever (they even look like the '90s still) and would easily be able to power even future monster GPUs. They screw in for a strong connection too, but no reason you couldn't make ones that snap in too.[1]

[1]https://i.ebayimg.com/thumbs/images/g/A0MAAOSwYGFUvkFg/s-l50...


Man would I prefer screw-in. I hate snap. All of those things in motherboards require serious force, and if you don't know what you're doing it's quite easy to not realize that the reason something isn't going in is a blockage/issue rather than not enough force. So the user adds more force and bam, something breaks.

Then of course there's just so much force in general it's easy for a finger/hand to slip and bump/hurt something else, etcetc.

I tend not to enjoy PC building because of that. Screws on everything would be so nice imo. Doubly so if we could increase the damn motherboard size to support these insane GPU sizes lol.


You are proposing a connector with exposed live 12V pins.

Not a hardware guy, but I wonder if that's a factor in connector choice. Basically, if a significant fraction of PC building is done by teens or young adults building their gaming rig in their living room, with neither formal training nor oversight, do designers have to make sure this is "teenage proof"?

The GPU and PSU would have female ports and the cable would be male.

12V isn't dangerous to humans, but it could spark quite a bit if it hit the computer chassis.


There exists a perfectly balanced point between usability and affordability that, if it can be achieved, makes exactly nobody happy.

GP's point is that "affordability" here is penny pinching considering the cost of the components those cables connect (and are usually included with).

> This space feels ripe for a radical re-design.

Making electrical connectors that do their job safely and properly is a solved problem in the engineering world.

Doing so in a way that allows for maximum profit is not.


What's wrong with the 4/6/8 pin plugs? I find them perfectly good. And they have a high power variant that would have worked much better here, rated for twice the current per pin.

They're the best of the bunch when it comes to PC parts, but think how far off they are in terms of usability compared to USB, or Ethernet, or HDMI, or Displayport, or those old VGA cables you had to screw in, or literally anything else. They only look good in comparison to the other power connectors.

> They're the best of the bunch when it comes to PC parts

Not really: the PSU side isn't standardized at all, and it's not obvious, because the cables will happily fit when you plug cables from PSU A into PSU B and fry your entire build.

There's no benefit to not having standards on that side, and the other side is all standard, so they are able to follow standards there. "It's just the way it's always been," so they keep doing it.


Even the now ancient and defunct FireWire 400 connector is nicer than most internal PC connectors.

> how far off they are in terms of usability compared to USB, or Ethernet, or HDMI, or Displayport, or those old VGA cables

Those connectors were not designed to carry power.


USB, especially USB C, is very much designed to carry power. Not quite as much as high end graphics cards guzzle these days, but it goes up to 240W. Ethernet, HDMI, DP and even VGA (with extensions) are also all used to carry power, even if at much smaller currents.

It's designed for 5 amps. In this context, that's close enough to "not carrying power".

If we're considering the bigger voltages that allow higher power on USB C, then the existing GPU plugs are fine because we can deliver 600W using the same current as a 150W 8 pin plug.


They want to use the Molex for some reason. That's what doesn't make sense. They could just, like, give it two ring connectors and let gamers screw them on. Bigger ring terminals take 50A (* 12V = 600W) just fine.

Define the exact "they" you're talking about and you'll start to see the problem.

I'm suspecting it's really less than half a dozen people at NVIDIA, like guys in purchasing division or PCB designers not wanting to make a drastic parts/footprint change. M8 SMD lug terminal in a gamer accessory is crazy, but not rocket science.

That's what I wondered, you may not understand all the players. I believe the PCI standard specifies this Molex connector. Somewhere between what Nvidia ships and the power supply itself, that standard is the only common connection.

No, NVIDIA's use of the connector and the first reports of melting predate the spec. Their hands were never tied to use it.

Gaming GPUs have had sagging problems for years too, and little is done to solve it. The cards are bending under their own weight. They're not products of proper engineering.


Power requirements of GPU cards are increasing with each generation, and pushing power to them becomes more difficult. Electricity through a wire causes heat. More power = more heat, resulting in things melting. Even the cable would melt (or explode) if a high enough current runs through it. People here are talking about 48 volts instead of 12 volts, which is one solution. But more cabling to distribute the current would be easier.

> I still have to sit there stressing out because I have no idea if the PSU<->Mobo power connector is seated properly

I recently switched my PSU and my onboard audio volume halved.

There's no way I'm going to switch back to see if the problem goes away because that connector was such a **ache to undo and reconnect.


My favorite is these shitty RGB connectors. They were obviously very recently decided on, yet somehow what we got is something without any positive retention or determined orientation yet still obnoxiously big.

Part of it is likely backward compatibility.

Discussion (22 points, 1 day ago, 10 comments) https://news.ycombinator.com/item?id=42996057

Why are people using 3rd party cables after the 40 series disaster?

they're cheaper?

That sounds like someone buying a Lamborghini then trying to save money by fueling with regular gas.

Yeah, the people that buy a Lamborghini are some of the "cheapest" people and will squabble over the smallest of things.

I hope that a brand that produces both PSUs and GPUs develops a higher voltage rail and a card that goes with it, as an open standard.

Wishful thinking, I know.

Especially because I don't even know if they can drift from the nVidia/AMD specs that much before being sanctioned or something.

Yeah, they will be more expensive, but I'd rather pay a few bucks more and be safer / not worry about burning my house down.


IIRC at the last PC trade show (the one in Taipei) one of the GPU+Motherboard makers was showing a prototype system where the PCIe slot had an additional slot behind it just for power, so no cables were required. Of course then that additional power needs to get into (and flow across) the motherboard somehow.

FWIW, I had the same issue with my 3090 (though I believe that uses a slightly different port?). I was using a custom cable like this guy. Nvidia replaced it under warranty, and I went back to using the (ugly) provided adapter.

People are letting their hatred of Nvidia blind them to what happened here. A customer upgraded from a 4090FE to a 5090FE. They were using a 3rd party ASUS 12VHPWR cable and didn't realize the 5090FE actually uses 12V-2x6, not 12VHPWR; while the port is the same, the pins are different lengths.

At the end of the day, PC power cabling is such a shambles, with things that look standard but are not, that you should only ever really use the cables that came with the product if you don't want to risk issues, especially given this specific port's poor history with 3rd party cables.


At least according to Corsair, there are no changes to the cable, only the PSU/GPU-side connectors:

> Cable: 12V-2x6 = 12VHPWR No difference!

> So what does this mean if you’ve already got hardware for 12VHPWR? Fortunately, existing 12VHPWR cables and adapters will work with the new 12V-2x6 connector as the new changes are only related to the GPU and some PSUs (Our new RMx PSUs for example). The cables you've got already will work fine, so don't worry.

https://www.corsair.com/uk/en/explorer/diy-builder/power-sup...


Connectors are where the issue is and there is a difference even if they fit in the same plugs and power can still go through them.

From your link

> Compared to the original 12VHPWR connector, the new 12V-2x6 connector has shorter sensing pins (1.5mm) while the conductor terminals are 0.25mm longer


AIUI, the connector _on the GPU/PSU_ is slightly different, but the connector on the cable is the same:

> As with any new standard, things are likely to evolve quickly and we’re now seeing the introduction of a new connector on the GPU and the PSU side of things. To be clear, this is not a new cable, it is an updated change to the pins in the socket, which is referred to as 12V-2x6.

Corsair's messaging on Reddit[1] emphasises this:

> Cable is the same. a 12VHPWR cable is a 12V-2x6 cable. it is ONLY the plugs on the graphics card / power supply that have changed.

> The cable is the same. 12VHPWR = 12V-2x6. You will get the exact same cable if you upgrade to a new PSU.

> As mentioned in image one, the cable is the same. Only the plug on the graphics card / PSU changed from 12VHPWR to 12V-2x6.

[1] https://www.reddit.com/r/Corsair/comments/1ha9no1/ive_made_s...


That's inconsistent messaging from Corsair, then. Parent comment quotes the times they're like "ehh, they're the same thing, don't worry about it" and then they go on to say "well TECHNICALLY there's a teeensy difference in conductor sizes"???

Either they are confident that the 0.25mm terminal difference is within tolerance enough that they consider 12VHPWR to be functionally equivalent to 12V-2x6, or they're getting themselves confused let alone the target audience of their article.


NVIDIA made the 12V-2x6 port backward compatible with the previous 4000 series 12VHPWR. If you make your port compatible with the past gen and it breaks with it, that's a design flaw; don't make it backward compatible then.

This is not a user error, this is NVIDIA design error.


If the new 12V-2x6 connector was incompatible with the old 12VHPWR connector, they should have (and would have) made it physically incompatible. They didn’t. You cannot blame the user for doing something which is specifically allowed by design.

> You cannot blame the user for doing something which is specifically allowed by design.

Are we really saying this when swapping one PSU for a different PSU, and reusing existing cables that all look and plug in the same, fries your build?

I think it's utterly absurd that this is the case but that's PC components for you.


The cables for 12V-2x6 and 12VHPWR are identical; it's the port that has different pin lengths (shorter sensing pins, longer conductor pins) to allow for better detection of poorly seated cables and better conductivity while loose.

Pedantry between "cable" and "connector", and claiming one is the same as the other, doesn't help people understand the situation.

The two standards are compatible, the port end hasn't changed, just the connector. The shorter sense pins are just designed to help detect an improperly connected cable, so that the sense pins only connect when the power pins are definitely connected.

Buildzoid and others have covered the design issues of the 12VHPWR cable and especially its horrific implementation on the 5090FE well enough[1] that I don't think it's worth going into too much detail here. For some god forsaken reason, they decided to just dump all of the voltage lines in parallel and then run a single shunt resistor to it, so if a conductor fails and the load becomes improperly distributed, there's no way for the card or the user to know until it catches fire. It's hard to come up with a reasonable justification for this.
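
As a toy illustration of why the single-shunt layout can't catch this (my own sketch of the idea, not Nvidia's actual monitoring logic; the per-wire currents are made up, roughly in line with der8auer's measurement):

  # Per-wire currents (amps) for a badly imbalanced but "working" cable.
  currents = [22.0, 18.0, 3.0, 3.0, 2.0, 2.0]   # illustrative values
  PER_WIRE_LIMIT = 9.5                          # approx. per-pin rating discussed upthread

  # Single shunt: the card only ever sees the total, which looks perfectly normal.
  print(f"total: {sum(currents):.0f} A -> looks fine (600 W / 12 V = 50 A expected)")

  # Per-wire (or per-group) shunts: the imbalance is trivially detectable.
  for i, amps in enumerate(currents):
      flag = "OVER LIMIT" if amps > PER_WIRE_LIMIT else "ok"
      print(f"wire {i}: {amps:4.1f} A  {flag}")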

But just so we're clear, there are 2 reports of catastrophic failures with the 5090 already, which should be even more alarming considering how few 5090s actually exist right now. The other failure didn't involve third-party cables.

Of course, if used improperly, you can burn through basically any cable, and any cable can fail... but when the failure rate of a specific cable is so high above the rest, it raises many questions. If a specific model of aircraft seems to have an oddly bad problem with pilot error, you can't just shrug that off. In my opinion, consumer computer equipment is the same. It shouldn't light on fire unless you've done something horribly wrong. And even if you do something horribly wrong, the hardware should at least be designed in a way that gives it a chance at failing gracefully first. The connectors that 12VHPWR replaced were specced with good safety margins and previous NVIDIA cards were designed to ensure current was balanced across voltage lines.

It is unclear why NVIDIA didn't see the issue with the 12VHPWR last generation and put some serious effort into fixing the problem. If they continue recklessly like this, there is a non-zero chance that the 12VHPWR connector is only retired after it finally causes loss of life.

[1]: https://youtu.be/kb5YzMoVQyw


Wait, so Nvidia made a connector that is physically but not electrically compatible with their previous generation and you think that's an argument for not blaming them?

"didn't realize the 5090FE actually uses 12V-2x6 not 12VHPWR, while the port is the same the pins are different lengths"

Broken by design.

I always use the cables that shipped with the PSU and not the cables that came with the GPU.


Building PCs has gone pretty mainstream at this point. Cases where it is easy to melt the thing by plugging it in wrong should be pretty rare.

Another video today from Der8auer showed him reproducing the issue on his own card. He measured a 22A load and a 150°C hot spot on a single wire. The problem seems to be much worse somehow; there is clearly no balancing between the wires on the Founders Edition.

Third party cables were never an issue on sanely designed ports with power balancing.

It's strange how Nvidia just doubled down on a flawed design for no apparent reason. It doesn't even do anything, the adapter is so short you still have the same mess of cables in the front of the case as before.

I was under the impression it saves them money. Is that correct?

It is also a power play. By introducing a PSU connector AMD and Intel do not use, they abuse their market power to limit interoperability.

Plus probably some internal arrogance about not admitting failures.


That's the majority understanding, but I suspect it was a simple "update" into "same" connector - the old one was a product called Molex Mini-Fit, and the new one is their newer Micro-Fit connector.

> Plus probably some internal arrogance about not admitting failures.

Arrogance is good. Accelerates the "forced correction" (aka cluebat) process. NVIDIA needs that badly.


>By potentially introducing a PSU connector AMD and Intel do not use they abuse their market power to limit interoperability.

I suppose but this could be overcome by AMD/Intel shipping an adapter cable


It saves them money on a four-digit MSRP. I think they could afford to be less thrifty.

The connector is a PCI spec, it's not an Nvidia thing, it's just they introduced devices using it first.

I don't think that's correct. Nvidia used that connector first and then a similar PCI spec came out. Compatibility is limited. See https://www.hwcooling.net/en/nvidia-12pin-and-pcie-5-0-gpu-p... from back then.

I'd forgotten about the weird 30 series case, but the 40/50 series ones are the PCI spec connector.

Being a PCI spec connector doesn't mean it isn't an Nvidia thing. It seems pretty likely at this point that Nvidia forced this through, seeing as there's zero other users of this connector. Convincing PCI spec consortium to rubber stamp it probably wasn't very hard for Nvidia to do.

> By potentially introducing a PSU connector AMD and Intel do not use they abuse their market power to limit interoperability.

They are free to use them, they just don’t because it is a stupid connector. The cards that need 600W are gonna need an enormous amount of cooling, therefore they will need a lot of space anyway, no point in making the connector small.

Yes, NVIDIA created an amazingly small 5090 FE, but none of the board partners have followed suit, so most customers will see no benefit at all.


I doubt engineering a new connector (I think it's new? Unlike the Mini-Fit Jr which has been around for like 40-50 years) and standing up a supply chain for it could offset the potentially slightly lower BOM cost of using one specialty connector instead of three MiniFit Jr 8-pins. However, three of those would not have been enough for the 4090, nevermind the 5090.

> three of those would not have been enough for the 4090, nevermind the 5090.

Oh, you are right, these PCIe power connectors can only deliver 150W each, so you would need 4 of those for a 4090/5090. I guess it makes sense to create a new standard for it then; hopefully they can make a newer revision of that connector that makes it safer.

In theory with the new standard you can have a single cable from the PSU to the GPU instead of 4, which would be a huge improvement. Except if you use those and then your PC catches fire, you will be blamed by the community for it. People on the reddit thread [1] were arguing that it was his own fault for using a "third party" connector.

[1] https://www.reddit.com/r/nvidia/comments/1ilhfk0/rtx_5090fe_...


EPS is practically identical to PCIe, just keyed slightly differently, and it can handle 300W. It's used for the CPU power connector and on some data centre GPUs. I've never been clear on why it didn't take over from the PCIe standard when more power was needed.

The old Mini-Fit takes 10A per pin, or theoretically 480W for an 8-pin. Existing PSUs would not be rated for that much current per PCIe harness, so the connector compatibility has to be intentionally broken for idiot-proofing purposes, but connector-wise, up to 960W (before safety margins) could technically be supplied fine with just 2x PCIe 8-pin.
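
Spelling out that arithmetic (the per-pin rating and the assumption that 4 of the 8 positions carry 12 V are taken from the comment above, not from a datasheet):

    per_pin_amps, power_pins, volts = 10.0, 4, 12
    one_connector_w = per_pin_amps * power_pins * volts   # 480 W per 8-pin, before margin
    print(one_connector_w, "W per connector;", 2 * one_connector_w, "W for two")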

This connector somehow has its own Wikipedia page and most of it is about how bad it is. Look at the table at the end: https://en.wikipedia.org/wiki/16-pin_12VHPWR_connector#Relia...

The typical way to use these is also inherently flawed. On the nVidia FE cards, they use a vertical connector which has a bus bar connecting all pins directly in the connector. Meanwhile, the adapter has a similar bus bar where all the incoming 12V wires are soldered on to. This means you have six pins per potential connecting two bus bars. Guess how this ensures relatively even current distribution? It doesn't, at all. It relies completely on just the contact resistance between pins to match.

Contrast this with the old 8-pin design, where each pin would have its own 2-3 ft wire to the PSU, which adds resistance in series with each pin. That in turn reduces the influence of contact resistance on current distribution. And all cards had separate shunts for metering and actively balancing current across the multiple 8-pin connectors used.

The 12VHPWR cards don't do this and the FE cards can't do this for design reasons. They all have a single 12 V plane. Only one ultra-expensive custom ASUS layout is known to have per-pin current metering and shunts (but it still has a single 12 V plane, so it can't actively balance current), and it's not known whether it is even set up to shut down when it detects a gross imbalance indicating connector failure.
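
A toy parallel-resistor model of that point (every number here is an assumption in a plausible order of magnitude, not a measurement): when the only thing differentiating six bussed paths is contact resistance, one unusually good pin hogs the current; putting even a short length of wire in series with each pin evens the split out considerably.

    TOTAL_A = 50.0                          # roughly 600 W at 12 V
    contacts_mohm = [2.0] + [10.0] * 5      # one contact much better than the rest
    wire_mohm = 13.2 * 0.6                  # ~0.6 m of 16 AWG per pin (assumed)

    def share(path_mohm):
        g = [1.0 / r for r in path_mohm]                 # conductance per path
        return [TOTAL_A * gi / sum(g) for gi in g]

    print("contacts only:      ", [f"{i:4.1f} A" for i in share(contacts_mohm)])
    print("with wire in series:", [f"{i:4.1f} A" for i in share([c + wire_mohm for c in contacts_mohm])])

In the bus-bar-only case the low-resistance pin ends up around 25 A while the rest sit near 5 A; with per-pin wire resistance in the path the worst pin drops to roughly 13 A.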


Can't have 4 connectors going into 1 video card, that would look ridiculous :/

- Nvidia


If we’re going to keep up these kilowatt scale cards, we’re just going to need higher voltage rails on PSUs. I had a bunch of similar dumb power connector problems when my 4090 was new.

Or just give up and provide a wall power cord.

actually, not a bad idea.

Is it effective to step down from 24/48 volts to the 1-2 range? Or would cards need two stages of voltage conversion?

It is tricky but possible. It is being done in the data center for certain hyperscalers. I wonder if Oxide Computer is doing it also, they mentioned a high voltage DC bus I think. https://epc-co.com/epc/about-epc/gan-talk-blog/post/14229/48...

19V is pretty standard in notebooks so 19-24V could probably be done with fairly little trouble. 48V would entail developing a whole new line of capacitors, inductors, power stages (transistors), and controllers.^1

^1: yes, of course, 48V compatible components exist for all of those. But the PC industry typically gets components developed for its specific requirements because it has the volume.


TL;DR: Yes there is a small difference in efficiency, but it's still plenty efficient.

You need a switching regulator for the current 12V anyway (as opposed to a linear regulator, which is much simpler but basically just burns power to reduce voltage), so the question is whether increasing the voltage 2-4x while keeping the same power requirements makes a difference.

- You need higher voltage rated components (mainly relevant for capacitors), potentially a bit more expensive but negligible for GPU costs. The losses around the inductor can be a bit higher too (a larger voltage step means more ripple current for the same inductance), but this is negligible.

- On the other hand you need thinner traces / less copper, and have more flexibility in the design.

For some concrete numbers, here is a random TI regulator datasheet [1], check out figure 45. At 5V 3A output, the difference in efficiency between 12V, 24V and 42V inputs is maybe 5%.

I think the main problem is that the industry needs to move together. Nvidia can't require 24/48V input before there is a standard for it and enough PSUs on the market offer it. This seemingly chicken-and-egg situation has happened in the past a bunch of times, so it's not a big problem, but it will take a while.

[1] https://www.ti.com/lit/ds/symlink/tps54340.pdf


Or 2 cables.

Recall in 3, 2, 1...

Why can't they just use a cable and socket similar to the one between the PSU and the wall socket? It's not even in the multiple-kilowatt range.

I'm aware of at least one card which did this, which was a custom OEM design (specifically from Asus) which put two Geforce 7000-series GPUs on a single card: https://pcper.com/2005/10/asus-n7800gt-dual-review-7800-sli-...

Thankfully, I've never seen something like it since then.


3dfx did it even earlier with the Voodoo 5 6000 all the way back in 2000.[1][2]

[1] https://www.extremetech.com/gaming/325466-i-wrote-the-first-...

[2] https://www.techpowerup.com/gpu-specs/voodoo5-6000.c3536


It's 12V, which means the currents are very high (like >40A). It feels like perhaps they need higher voltage power supplies.

I think the problem with this is that chips can only use a relatively low voltage around 1-1.5 volts. So if you supply 48 volts to the card it still has to be stepped down and this means more components and heat dissipation on the card. We are basically arriving at the idea of graphic cards having their own integrated PSUs, but this doesn't fit well with the current physical design of computers.

I feel like it could work with an external power brick and the card exposing a dedicated external port.

For what purpose? That gives you similar cabling problems to internal connections, but now that cable is far less protected.

It would be a different connector and it moves some mass and heat outside of the case.

If you want a different connector then just use a different connector.

Moving 5% of the mass and 1% of the heat outside the case is a bad thing. One of the main purposes of the case is to be one big hunk of mass.


The cards are supplied with 12 volts today, which is stepped down. There isn't a significant difference between 48 volts and 12 volts other than the amperage.

That connector (C13) is rated for 15 amps. That's 180W at 12V.

So heat depends on current, not on power. My research: https://www.reddit.com/r/ElectricalEngineering/comments/15xf...

No, it depends on power, but on the power dissipated by the cable, not the power through the cable. The power dissipated is i^2 * r, where r is the resistance of the cable and i, crucially, is the current through the cable, which depends on the power it's supplying (which, at a fixed 12V supply, is i * 12V).
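
A quick numeric check of that point (the cable resistance here is a made-up round number): for a roughly constant-power load, the current, and therefore the i^2 * r heating in the cable, falls as the supply voltage rises.

    R_CABLE = 0.010  # ohm, assumed total conductor resistance for the run

    def cable_loss(load_watts, volts):
        i = load_watts / volts          # current drawn by a constant-power load
        return i, i * i * R_CABLE       # amps, watts dissipated in the cable

    for v in (12, 24, 48):
        i, p = cable_loss(600, v)
        print(f"{v:>2} V supply: {i:5.1f} A, {p:5.2f} W lost in the cable")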

Yes, but it seems the connectors, not the entire cable are too high resistance.

Using a larger diameter wire would drop resistance in the cable, but if it has to go through the same connector, it will likely still get hot.

Also, NVDA might be telling the truth about poorly seated connectors, that could raise resistance and heat significantly. That could also be handwaving away a business decision to move forward with a design with too little margin.


And the card requires up to 600 watts, which is 50 amps if the supply is 12 volts.

I’m not sure what you’re trying to suggest here… the PSU is also connected to the wall with a C13 connector, and is able to supply 600W at 12VDC to the card.

What’s wild is that this is a company clearly capable of designing highly complex things with numerous tradeoffs, challenges and unknowns. And then the fuckin cable is the issue. Repeatedly.

This is because they are trying to parallel something like 50 amps (it's 12 volts, IIRC) over a few conductors to get to 600 watts.

If it becomes unbalanced due to any number of reasons, none of those individual cables can come close to handling it - they will all generate enough heat to melt lots of things.

Conservatively, they'd have to be 8 AWG each to handle the full load without melting if it ended up on a single conductor.

That's the crappy part about low voltages.

If the voltage was higher (i believe 'low volt' classification tops out at 48v), it'd be more dangerous to deal with in some aspects, but it'd be easier to have small cables that won't melt.
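
For a sense of scale, here is the dissipation per metre if one conductor ended up carrying the full 50 A (resistance values are approximate copper figures; real ampacity ratings also depend on insulation and bundling):

    R_PER_M = {"8 AWG": 0.00206, "10 AWG": 0.00328, "12 AWG": 0.00521,
               "14 AWG": 0.00828, "16 AWG": 0.01317}   # ohm per metre, approximate
    I = 50.0  # amps: 600 W at 12 V on a single conductor
    for gauge, r in R_PER_M.items():
        print(f"{gauge}: {I * I * r:5.1f} W dissipated per metre")

That is roughly 5 W per metre for 8 AWG versus over 30 W per metre for the 16 AWG typically used, which is why a single-conductor fault is so unforgiving at 12 V.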


Can we talk about how absolutely terrifying that 600W figure is? We're not transcoding or generating slop as the primary use case, we're playing computer games. What was wrong with the previous-generation graphics that we still need to push for more raw performance, rather than reducing power draw?

What was “wrong” is that enough people are willing to pay exorbitant prices for the highest-end gear that Nvidia can do most anything they want as long as their products have the best numbers.

Other companies do make products with lower power draw — Apple in particular has some good stuff in this space for people who need it for AI and not gaming. And even in the gaming space, you have many options for good products — but people who apparently have money to burn want the best at any cost.


We must be thinking about very different types of games, because even though I’m completely bought into the Apple ecosystem and love my M3 macbook pro and mac mini, I have a windows gaming PC sitting in the corner because very few titles I’d want to play are available on the mac.

Take a step back, for perspective.

1. People want their desktop computers to be fast. These are not made to be portable battery sippers. Moar powa!!!

2. People have a power point at the wall to plug their appliances into.

Ergo, desktop computers will tend towards 2000w+ devices.

"Insane!" you may cry. But a look at the history of car manufacture suggests that the market will dictate the trend. And in similar fashion, you will be able to buy your overpowered beast of a machine, and idle it to do what you need day to day.


Well exactly my point. I'm "still" using an M1 Mac mini as my daily driver. 6W idle. In a desktop. It is crazy fast compared to the Intel Macs of the year before, but the writing was already on the wall: this is the new low-end, the entry level.

Still? It runs Baldur's Gate 3. Not smoothly, but it's playable. I don't have an M4 Pro Max Ultra Plus around to compare the apples to apples, but I'd expect both perf and perf per watt to be even better.

If one trillion dollar company can manage this, why not the other?


Is gaming even the primary use case for the *090 series anymore? The 5070, which is probably going to be the most popular gaming card, is 250W. If I recall correctly, it can push 4k @ 60fps for most games.

But yes, I do agree that TDPs for GPUs are getting ridiculous.


4k 60Hz is still largely unachievable for even top of the line cards when testing recent games with effects like raytracing turned up. For example, an RTX 4090 can run Cyberpunk 2077 at 4k at over 60fps with the Ray Tracing Low preset, but not any of the higher presets.

However, it's easy to get misled into thinking that 4k60 gaming is easily achieved by more mainstream hardware, because games these days are usually cheating by default using upscaling and frame interpolation to artificially inflate the reported resolution and frame rate without actually achieving the image quality that those numbers imply.

Gaming is still a class of workloads where the demand for more GPU performance is effectively unlimited, and there's no nearby threshold of "good enough" beyond which further quality improvements would be imperceptible to humans. It's not like audio where we've long since passed the limits of human perception.


4k@60 isn't all that good today and 5070 can do it with reduced graphics in modern games.

x90 cards IMO are either bought by people that absolutely need them (yay market segmentation) or simply because they can (affording is another story) and want to have the best of the latest.


Is your argument that computer games don't merit better performance (e.g. pushing further into 4K) and/or shouldn't expand beyond the current crop and we give up on better VR/AR ?

This generation seems to be getting its performance from using more power and more cores. Not really an architectural change, just packing more things into the chip that require more power.

Too true. I've been looking to replace my 1080. This was a beast in 2016, but the only way I can get a more performant card these days is to double the power draw. That's not really progress.

Then get a modern GPU and limit the power to what your 1080 draws. It will still be significantly faster. GPU power is out of control these days, if you knock 10% off the power budget you generally only lose a few percentage of performance.

Cutting the 5090 down from 575w to 400w is a 10% perf decrease.


Even if I knew how to do that, I'd still need double the power connectors I currently have.

That's because the 1080 and the whole 10xx generation were the pinnacle, the best GPUs Nvidia ever made. Nvidia won't make the same mistake any time soon.

Why should we reduce power draw? We live in an age of abundance.

Can you point me to the abundance? Because I sure can point you to the consequences of thinking we live in an age of abundance.

If only we had connectors which could actually handle such currents. Maybe something along the lines of an XT90, but no Nvidia somehow wants to save a bit of space or weight on their huge brick of a card. I don't get it.

The USB-C connectors on laptops and phones can deliver 240 watts [1] in a 8.4x2.7mm connector.

12VHPWR is 8.4x20.8mm so it's got 7.7x the cross-sectional area but transmits only 2.5x the power. And 12VHPWR also has the substantial advantage that GPUs have fans and airflow aplenty.

So I can see why someone looking at the product might have thought the connector could reasonably be shrunk.

Of course, the trick USB-C uses is to deliver 5A at 48v, instead of 50A at 12v

[1] https://en.wikipedia.org/wiki/USB-C#Power_delivery
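
The per-contact numbers behind that comparison (pin counts are the commonly cited ones, so treat them as approximate):

    connectors = {
        "USB-C PD EPR, 240 W": {"power": 240, "volts": 48, "power_pins": 4},
        "12VHPWR, 600 W":      {"power": 600, "volts": 12, "power_pins": 6},
    }
    for name, c in connectors.items():
        amps = c["power"] / c["volts"]
        print(f"{name}: {amps:.1f} A total, ~{amps / c['power_pins']:.2f} A per power pin")

So USB-C asks roughly 1.25 A of each VBUS contact while 12VHPWR asks over 8 A of each 12 V pin, which is the real difference hiding behind the size comparison.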


Nobody thought that they could push 50A at 12V through half the connector. It's management wanting to push industrial design as opposed to safety. They made a new connector borrowing from an already existing design, pushed up the on-paper amperage by 3A, never changed the contact resistance, and made the parent connector push current near its limit (10.5A max vs 8.3A). And oh, the insertion force is so, so much higher than ever before. Previous PCIe connectors push about 4A through a connector designed for about 13A.

Worth also mentioning that the same time the 12VHPWR connector was being market tested was during Ampere, the same generation where Nvidia doubled down on the industrial design of their 1st party cards.

Also there's zero devices out there that actually deliver or take 240W over USB-C. Texas Instruments literally only released the datasheets for a pivotal supporting IC within the last 6 months.


> Also there's zero devices out there that actually deliver or take 240W over USB-C. Texas Instruments literally only released the datasheets for a pivotal supporting IC within the last 6 months.

The 16-inch Framework laptop can take 240W power. For chargers, the Delta Electronics ADP-240KB is an option. Some Framework users have already tried the combination.


> Nobody thought that they could push 50A at 12V through half the connector.

If you're saying that the connector doesn't have a 2x safety factor then I'd agree, sure.

But I can see how the connector passed through the design reviews, for the 40x0 era cards. The cables are thick enough. The pins seem adequate, especially assuming any GPU that's drawing maximum power will have its fans producing lots of airflow; plenty of connectors get a little warm. There's no risk of partial insertion, because the connector is keyed, and there's a plastic latch that engages with a click, and there's four extra sense pins. I can see how that would have seemed like a belt-and-braces approach.

Obviously after the first round of melted connectors they should have fixed things properly.

I'm just saying to me this seems like regular negligence, rather than gross negligence.


The spec may say it, but I've never encountered a USB-C cable that claims to support 240 watts. I suspect if machines that tried to draw 240W over USB-C were widespread, we would see a lot of melted cables and fires. There are enough of them already with lower power draw charging.

Search Amazon for "240W USB" and you get multiple pages of results for cables.

A few years ago there was a recall of OnePlus cables that were melting and catching fire, I had 2 of them and both melted.

But yes 240W/48V/5A is insane for a spec that was originally designed for 0.5W/5V/100mA. I suspect this is the limit for USB charging as anything over 48V is considered a shock hazard by UL and 5A is already at the very top of the 3-5A limit of 20AWG for fire safety.


We've had a variety of 140W laptops for a few years already, so the original spec has been far away for a while now.

The advantage of USB-C is the power negotiation, so getting the higher rating only on circuits that actually support it should be doable and relatively safe.

The OnePlus cables melting give me the same impression as when hair dryer power cables melt: it's a solved problem; the onus is on the maker.


240W cables are here but at around a 10x price premium. Also cables are chipped so e.g. a 100W cable won't allow 240 in the first place.

Users needing the 240W have a whole chain of specialized devices, so buying a premium cable is also not much of an issue.


The connector could reasonably be shrunk. It just now has essentially no design margin so any minor issue immediately becomes major! 50A DC is serious current to be treated with respect. 5A DC is sanely manageable.

If only we had electrical and thermal fuses that could be used to protect the connectors and wires.

At these wattages just give it its own mains plug.

> At these wattages just give it its own mains plug.

You might think you're joking, but there are gamer cases with space for two PSUs, and motherboards which can control a secondary PSU (turning both PSUs on and off together). When using a computer built like that, you have two main plugs, and the second PSU (thus the second mains plug) is usually dedicated to the graphics card(s).


Server systems already work like this for redundancy.

I've done this, without a case, not because I actually used huge amounts of power, but because neither PSU had the right combination of connectors.

The second one was turned on with a paperclip, obviously.

Turns out graphics cards and hard drives are completely fine with receiving power but no data link. They just sit there (sometimes with fans at max speed by default!) until the rest of the PC comes online.


You can also hookup a little thingy that takes sata power on one side and 24 pin on the other. As soon as there is power on sata side, relay switches and second PSU turns on.

Also put it in a separate case, and give it an OcuLink cable to attach to the main desktop tower. I suspect that's exactly where we're heading, to be fair.

I've built video rigs that did just that. An external expansion chassis that you could put additional PCIe cards when the host only had 3 slots. The whole eGPU used to be a cute thing, but it might have been more foreshadowing than we realized.

Have you measured latency?

In modern(last 4 years approximately) GPUs, physical wiring distance is starting to contribute substantially to latency.


Latency due to wiring distances is far from being an issue in these scenarios. The signals travel at roughly the speed of light, 186 miles per millisecond.

The problem you will encounter with pcie gen5 risers is signal integrity.


There were no latency concerns. These were video rigs, not realtime shoot'em ups. They were compute devices running color correction and other filters type of thing, not pushing a video signal to a monitor 60fps 240Hz refresh nonsense. These did real work /s

Not without precedent: The Voodoo 5 6000 by 3dfx came with its own external PSU almost 25 years ago.

https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQqLXew...


True. At these prices they might as well include a power brick and take responsibility for the current-carrying path from the wall to the die.

We could also do it like we do in car audio: just two big fat power cables, positive and negative, 4 AWG or even bigger, with a nice crimped ferrule or lug bolted on.

> If only we had connectors which could actually handle such currents.

The problem isn't the connectors; the problem (fundamentally) is sharing current between multiple parallel conductors.

Sure, you can run 16A over 1.5 mm² wires, and 32A over 2.5 mm² (taken from [1]; yes, it's for 230V, but that doesn't matter, the current is what's important, not the voltage). Theoretically you could run 32A over 2x 1.5 mm² (you'd end up with 3 mm² of cross section), but it's not allowed by code. If, for any reason, either of the two legs disconnects entirely or develops increased resistance, e.g. due to corrosion or a loose screw / wire nut (hence, please always use Wago-style clamps; screws and wire nuts are not safe even if torqued properly, which most people don't do), suddenly the other leg has to carry (much) more current than it's designed for, and you risk anything from molten connectors to an outright fire. And that is what NVidia is currently running into, together with bad connections (e.g. due to dirt ingress).

The correct solution would be for the GPU to not tie together the incoming individual 12VHPWR pins on a single plane right at the connector input but to use MOSFETs and current/voltage sense to detect stuff like different current availability (at least it used to be the case with older GPUs that there were multiple ways to supply them with power and only, say, one of two connectors on the GPU being used) or overcurrents due to something going bad. But that adds complexity and, at least for the overcurrent protection, yet another microcontroller plus one ADC for each incoming power pin.

Alternatively each 12VHPWR pair could get its own (!) DC-DC converter down to 1V2 or whatever the GPU chip actually needs, but again that also needs a bunch of associated circuitry.

Another and even more annoying issue by the way is grounding - because all the electricity that comes in also wants to go back to the PSU and it can take any number of paths - the PCIe connector, the metal backplate, the 12VHPWR extra connector, via the shield of a DP cable that goes to a Thunderbolt adapter card's video input to that card, via the SLI connector to the other GPU and its ground...

Electricity is fun!

[1] https://stex24.com/de/ratgeber/strombelastbarkeit


This is the top end halo product. What's wrong with pushing the envelope? Should we all play tetris because "what's wrong with block graphics?".

I'm not defending the shitty design here, but I'm all for always pushing the boundaries.


Pushing the boundaries of a simple connector is not innovation, that's just reckless and a fire hazard.

> If the voltage was higher (i believe 'low volt' classification tops out at 48v)

Yep, 48V through sensitive parts of the body could be unpleasant, but 24V is almost as safe as 12V. Why didn't they use 24V and 25A to achieve the required 600W of power instead of 12V and 50A?


Because no PC power supply has a 24V rail and even though there's a fancy new connector you can still use an adapter to get the old-fashioned plugs.

After all you don't want to limit your market to people who can afford to buy both your most expensive GPU and a new power supply. In the PC market backwards compatibility is king.


>Because no PC power supply has a 24V rail

Servers with NVIDIA H200 GPUs (Supermicro ones, for example) have power supplies with a 54 volt rail, since that GPU requires it. I can easily imagine a premium ATX (non-mandatory, optional) variant that has a higher voltage rail for people with powerful GPUs. The additional cost shouldn't be an issue considering top-level GPUs that would need such a rail cost absurd money nowadays.


A server is not a personal computer. We are talking about enthusiast GPUs here who will install these components into their existing setup whereas servers are usually sold as a unit including the power supply.

> Additional cost shouldn't be an issue considering top level GPUs that would need such rail cost absurd money nowadays.

Bold of you to assume that Nvidia would be willing to cut into its margin to provide an optional feature with no marketable benefit other than electrical safety.


Electrical safety -> not destroying your GPU does seem like something sellable.

It could probably be spun into some performance pitch if you really wanted to.


A higher input voltage may eventually be used but a standard PC power supply only has 12V and lower (5V and 3.3V) available, so they'd need to use a new type of power supply or an external power supply, both of which are tough sells.

On the other hand, the voltages used inside a GPU are around 1V, and a higher input voltage introduces lower efficiency in the local conversion.

12V is only really used because historically it was available with relatively high power capacity in order to supply 12V motors in disk drives and fans. If power supplies were designed from the ground-up for power-hungry CPUs and GPUs, you could make an argument for higher voltage, but you could also make an argument for lower voltage. Or for the 12V, either because it's already a good compromise value, or because it's not worth going against the inertia of existing standards. FWIW there is a new standard for power supplies and it is 12V only with no lower or higher voltage outputs.


Lower voltage would be OK, but then all of the cables and plugs would need to be redesigned. And there would still need to be voltage conversion somewhere, unless we start adding a protocol and have the PSU switch voltage dynamically... which is also not efficient.

Since they went so far as to create a new cable which won't be available on old PSUs, they could have easily extended that slightly and introduced an entirely new PSU class with a new voltage as well. But they went the easy route and it failed, which is even worse, as they will have to redesign it now instead of getting it done safely the first time.


> Lower voltage would be OK, but then all of the cables and plugs would need to be redesigned. And there would still need to be for voltage switching unless we start adding a protocol and have PSU switch voltage dynamically... which is also not efficient.

It's not like that. It's a design where the PSU only provides 12V to the motherboard and the motherboard provides the rest. Only the location of those connectors changes. It's called ATX12VO.

In a modern PC almost nothing draws from the 3.3V rail, not even RAM. I'm pretty sure nothing draws 3.3V directly from the PSU at all today.

The 5V rail directly from the PSU is only used for SATA drives.


Because nobody makes 24V power supplies for computers, they'd have to convince the whole industry to agree on new PSU standards.

> they'd have to convince the whole industry to agree on new PSU standards.

We already have a new PSU standard, it's called ATX12VO, and it drops all the lower voltages (5V, 3.3V), keeping only 12V. AFAIK it hasn't seen wide adoption.


It's also of no use for the problem at hand, PCIe already uses 12V but that's way too low for the amount of power GPUs want.

It's not great. Dropping 5V makes power routing more complicated and needs big conversion blocks outside the PSU.

I would say it makes sense if you want to cut the PSU entirely, for racks of servers fed DC, but in that case it looks like 48V wins.


There are already huge conversion blocks outside the PSU. That's why they figured there's no need to keep an extra one inside the PSU and run more wiring everywhere.

Your CPU steps down 12 volts to 1 volt and a bit. So does your GPU. If you see the big bank of coils next to your CPU on your motherboard, maybe with a heatsink on top, probably on the opposite side from your RAM, that's the section where the voltage gets converted down.


Those are actually at the point of use and unavoidable. I mean extra ones that convert to 5V and then send the power back out elsewhere. All those drives and USB ports still need 5V and the best place to make it is the PSU.

Yup, exactly. The VRMs on my Threadripper board take up quite a bit of space.

24VDC is the most common supply for industrial electronics like PLCs, sensors etc. It is used in almost every type of industrial automation systems. 48VDC is also not uncommon for bigger power supplies, servos, etc.

https://www.siemens.com/global/en/products/automation/power-...


Cutting the ampacity in half from 50A to 25A only drops the minimum (single) conductor size from #8 to #10; also, there is no 24V rail in a PSU.

But you would then need to bring it down to the low voltages required by the chips, and that would greatly increase the cost, volume, weight, electrical noise and heat of the device.

Nah, modern GPUs are already absolutely packed with buck converters, to convert 12v down to 2v or so.

Look at the PCB of a 4090 GPU; you can find plenty of images of people removing the heatsink to fit water blocks. They literally have 24 separate transistors and inductors, all with thermal pads so they can be cooled by the heatsink.

The industry could change to 48v if they wanted to - although with ATX3.0 and the 16-pin 12VHPWR cable being so recent, I'd be surprised if they wanted to.
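
Some rough numbers for that step-down stage (the phase count and core voltage are ballpark assumptions, not a teardown measurement); the point is that the output side barely cares about the input voltage, while the input current drops a lot:

    P_OUT, V_CORE, PHASES = 575.0, 1.0, 24   # watts, volts, buck phases (assumed)
    i_core = P_OUT / V_CORE                  # total current delivered at core voltage
    print(f"~{i_core:.0f} A at the core, ~{i_core / PHASES:.0f} A per phase")
    for v_in in (12, 48):
        print(f"input at {v_in} V: {P_OUT / v_in:.1f} A into the converters")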


They could make a new spec for graphics cards and have a 24v/48v rail for them on a new unique connector.

I guess the problem is not only designing the cards to run on the higher voltages but also getting AMD and Intel on board because otherwise no manufacturer is going to make the new power supplies.


IIRC the patchwork of laws, standards and regulations across the world for low voltage wiring is what restricted voltages in the 36 V – 52 V range. Some locations treat it as low voltage, some as intermediate, and others as high voltage.

It may be marine market specific, but several manufacturers limit to 36v for even high amperage motors because of it.

Obviously I = P/V will force this in the future though.


USB PD can go up to 48V so I'd assume that's fine from a regulatory standpoint.

Going from 12V to 48 means you can get 600W through an 8-pin with a 190% safety factor, as opposed to melting your 12VHPWR.
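
Checking that figure (assuming 3 current-carrying pins on the 8-pin at roughly 8 A each; both numbers are assumptions for the sake of the arithmetic):

    pins, per_pin_a, load_w = 3, 8.0, 600
    capacity_a = pins * per_pin_a
    for volts in (12, 48):
        needed_a = load_w / volts
        print(f"{volts} V: need {needed_a:.1f} A, connector handles {capacity_a:.0f} A "
              f"({capacity_a / needed_a * 100:.0f}% of what's required)")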


Of course there is, same on motherboards and to a smaller extent hard drives.

The voltage step-down is already in place, from 12V to whatever 1V or 0.8V is needed. Doing the same thing starting from 48V instead of 12V does not change anything fundamentally, I guess.

It changes a lot. You are switching at different frequencies and although the currents are smaller, there is an increased cost if you want to do it efficiently and not have too many losses.

But anyway for consumer products this is unlikely to happen because it would force users to get new power supplies which would reduce their sales quite drastically at least for the first one they make like that.

The solution would maybe be to make a low volume 48V card and slowly move people over it showing them it is better?

Anyway this is clearly not a case of "just use X" where X is 48V. It is much more subtle than that.


> a low volume 48V card

I wouldn't be shocked if someone told me that Nvidia already sells more 48V parts than consumer 12V parts.


48V would work with significantly cheaper wiring.

Yes. I'm not suggesting they increase the voltage, as i said, there are lots of tradeoffs.

But i'll also say - outside of heat, all of the things you listed are not safety concerns (obviously, electrical noise can be if it's truly bad enough, but let's put that one mostly aside).

Having a small, cost efficient, low weight device that has no electrical noise is still not safe if it starts fires.


When you work with normal AC power, it is considered unsafe practice to use parallel wires to share load in a circuit. Reason: one might get decoupled somehow, you don’t notice, and when fully loaded the heat causes a fire risk. This problem sounds similar. A single fat wire is the easiest, but I guess it’s not that simple.

> This problem sounds similar. A single fat wire is the easiest, but I guess it’s not that simple.

The problem is the 12V architecture, so the only way you can ramp power up is to increase amperage, and sending 50A over a single wire would probably require 8AWG. That's... really not reasonable for inside a PC case.

Then again, burning down your house is somewhat unreasonable too.


> When you work with normal AC power, it is considered unsafe practice to use parallel wires to share load in a circuit.

The NEC permits using conductors #1/0AWG or larger for parallel runs, it doesn’t forbid it entirely.


Yeah. I have 800 amp service which is basically always done with parallel 400 mcm or 500 mcm (depending on where it is coming from, since POCO doesn't have to follow NEC)

Within conduit, there is basically no other option. In free air there are options (750 mcm, etc).

Even if there were, you could not pay me to try to fish 750 mcm through conduit or bend it


The 8awg would need a massive connector, else it will still melt/desolder.

Would be trivial to add a fuse / resettable breaker inline.

That would be a novel failure mode: the GPU scheduler had an unbalanced work load across the cores and tripped a breaker. The OS can reset? Kill the offending process "out of power"?

Both of my previous cars had door recalls.

Ford Focus Mk3 and Prius Prime 2024.

Yup. The door had weird failure cases that needed a recall.

--------

Connectors and cables are damn near Masters-level in knowledge and application. It's a rarely studied and often ignored piece of engineering. The more you learn about them, the crazier it gets.

That being said, this news that the 4090 and 5090 are using but one shunt resistor for all 6 power pins is horrifying to me. I'm not a power engineer, but it looks horrifyingly wrong to my napkin math.

People underestimate the problems of physical design or the design effort needed to make good designs.


A bit off topic, but this is something HN might enjoy. It’s a video by a mechanical engineer that worked for Tesla, Apple and a NASCAR team about Tesla door handles over time.

https://youtu.be/Bea4FS-zDzc


I see content like this, and it inspires me for what I want my retirement to be like. Just some crazy old eccentric puttering about, tinkering in his workshop. I want my back yard to look like a solar punk world's fair.

A card that draws 600 watts likely already has more than 6 phases of power conversion, so it could put a separate phase on each pin, and one more on the PCIe slot power, and then guarantee load balancing as well as being able to detect any single broken connection.
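
A hypothetical sketch of what that per-pin supervision could look like in firmware (pin limits, thresholds and the sample readings are all made up for illustration):

    PIN_LIMIT_A = 9.5        # assumed per-pin ceiling
    IMBALANCE_RATIO = 1.5    # assumed allowed spread versus the average

    def check_pins(currents_a):
        """Return warnings for a set of per-pin current readings."""
        warnings = []
        avg = sum(currents_a) / len(currents_a)
        for idx, amps in enumerate(currents_a):
            if amps > PIN_LIMIT_A:
                warnings.append(f"pin {idx}: {amps:.1f} A over per-pin limit")
            elif avg > 1.0 and amps < 0.1:
                warnings.append(f"pin {idx}: no current, likely open contact")
            elif avg > 1.0 and amps > IMBALANCE_RATIO * avg:
                warnings.append(f"pin {idx}: {amps:.1f} A vs {avg:.1f} A average")
        return warnings

    # one open pin, with the remaining current piling onto two neighbours
    print(check_pins([8.2, 8.4, 8.3, 0.0, 12.6, 12.5]))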

Could do yes.

But when Founders Edition cards made by NVidia are $2000 and the FE editions have no such mechanism, why would any AIB maker go above and beyond?

You just make your cards more expensive and it's difficult to tell consumers what the difference is exactly.


Yeah, but we have also had reliable, mature connectors for decades, and yet they keep trying to make new connectors; it's not rocket science to transfer that kind of load.

Reminds me a bit of BMWs and their infamous and persistent coolant pump woes. The running joke in those circles is "replacing the entire cooling system" counts as "basic, regular maintenance". BMW makes a fantastic engine, then makes the water pump impeller out of plastic. For what feels like decades.

My E39 (which is a beacon of reliability for the brand) had a radiator neck made of a kind of plastic that becomes brittle with prolonged exposure to heat. It's a good thing there's no heat associated with the radiator. "Replace the entire radiator" was a ~70k mile maintenance task.

Yup, mine blew apart in traffic on H1 (Oahu) at 35MPH when the car (530i) was ~3 years old. I think it had maybe 35k miles on it.

Dealer offered to allow me to pay extra for an all aluminum since that’s what they recommended but the factory wouldn’t cover.


E39 beacon of reliability, is this sarcasm? I really can't tell

It's not, actually - the E39s are incredibly reliable compared to more recent models. I drove mine for 20 years.

That said, it's the difference between "fairly unreliable" and "spectacularly unreliable".


I miss mine enough I’ve been debating on buying another.

A 530i Sport with a manual (!!!) popped up near me for a song but I just can’t justify it.


I have the B58 which is fantastic and does come with an all metal, mechanical water pump which I thought would be a pleasant break, but my gasket still failed. BMW and water pumps, classic.

It boggles the mind how a $10 space heater has better overheating protection than this $4000 space heater.

To be fair, a $10 space heater's main purpose is to get hot, so it makes sense that there is protection against getting too hot.

It's not an engineering problem, it's a political problem. If you present this problem to any power engineer at Nvidia, they'd probably say something akin to "yeah, delivering 600W at 12V over a 12-prong connector is insane, up the voltage". The issue is that 12V has been the standard voltage for ages, and if you want to sell a product that requires a higher voltage, you first need to get the industry to produce PSUs that deliver a higher voltage.

Cables are hard! Back when I did EE work we tried to avoid cables as much as possible because they cause all sorts of annoyances

My 4090 connector melted too

This needs a name along the lines of The Red Ring of Death. What will we call it?

Black goo of doom?

150 deg? You can nearly bake a pizza with that.

The thing takes enormous power. Some people had trouble with the 4090s too but I haven’t and I run a shit ton of them.

All these incidents are from aftermarket cables.

The current within a single cable is something like 25 A according to der8auer's measurements. That's crazy for those thin cables. So if the current distribution is that uneven, then no cables will be completely safe, IMHO.

The cable and the connector are standardized, and no evidence of noncompliance has been demonstrated yet. ¯\_(ツ)_/¯

Also note that the only first-party cable NVIDIA provides is an adapter for converting between four 8-pin PCIe power connectors and the 16-pin 12V2x6 power connector. They do not provide a 12V2x6 to 12V2x6 cable nor a 12VHPWR to 12V2x6 cable, you're left with your PSU manufacturer's or other third parties' cables in those scenarios (which would include the reporter's).

I will say that it's regrettable that even der8auer's analysis was basically "idk man, looks good to me". He did use a cable from the manufacturer of his PSU, however, at least for his own testing, so if you'd consider that a first- or second-party product, there you go.



