> I became proficient in Rust in a week, and I picked up Svelte in a day. I’ve written a few shaders too! The code I’ve written is pristine. All those conversations about “should I learn X to be employed” are totally moot.
You did not and you are not proficient. LLMs and AI in general cater to your insecurities. An actual good human mentor will wipe the floor with your arrogance and you'll be better for it.
I think you're under the impression that I am not a software engineer. I already know C, and I've even shipped a very small, popular, security-sensitive open source library in C, so I am certainly proficient enough to rewrite Python into Rust for performance purposes without hiring a Rust engineer, or to write shaders to help debug models in Blender.
My point is that LLMs make it 10x easier to adapt and transition to new languages, so whatever moat someone had by being a "Rust developer" is now significantly eroded. Anyone with solid systems programming experience could switch from C/C++ to Rust with the help of an LLM and be proficient in a week or two. By proficient, I mean able to ship valuable features. Sure, they'll have to lean on an LLM to smooth out unfamiliar concepts like borrow checking, but they'll surely be able to deliver given how strict the Rust compiler already is.
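To make the borrow-checking point concrete, here's a minimal sketch (my own toy example, not anything from this thread) of the kind of rule rustc enforces and explains for you:

    fn main() {
        let mut scores = vec![1, 2, 3];
        {
            let first = &scores[0]; // immutable borrow of `scores`
            println!("first = {first}");
        } // the borrow ends here, so the mutation below is allowed
        scores.push(4); // rustc would reject this line if `first` were still live
        println!("{scores:?}");
    }

A C developer who trips over that rule gets an exact span and suggestion from the compiler, and an LLM can explain the aliasing reasoning behind it; that's the kind of ramp-up I'm describing.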
I agree fundamentals matter and good mentorship matters! However, good developers will be able to do a lot more diverse tasks which means more supply of talent across every language ecosystem.
For example, I don't feel compelled at all to hire a Svelte/Vue/React developer specifically anymore: any decent frontend developer can race forward with the help of an LLM.
I realize I came across as harsh and I surely don't want to judge you personally on your skills, as A) that's not necessary for my point to make sense and B) it's uncalled for. I'm sure you are a capable C developer and I'm sorry for being an asshole - but I am one, so it's hard for me to pretend otherwise...
Being able to program in C is something I can also do, but it sure as heck does not make me a proficient Rust developer if I cobble some shit from an LLM together and call it a day.
I can appreciate how "businesses" think this is valuable, but - and this is often forgotten by salaried developers - as I am not a business owner I have neither the position nor the intention of doing any "business". I am in a position to do "engineering". Business is for someone else to worry about. Shipping "valuable features" is not something I care about. Shipping working and correct features is something I worry about. Perhaps modern developers should call themselves business analysts or something if they wish to stop engineering.
LLMs are souped-up Stack Overflows, and I can't believe my ears when I hear a fellow developer say that someone on Stack Overflow ported some of their code to Rust on request, and that this feature of SO now makes them a proficient Rust developer because they can vaguely follow the code and can now "ship" valuable features.
This is like being able to vaguely follow Kant's Critique of Pure Reason, which is something any amateur can do, compared to being able to engage with it academically and rigorously. I deeply worry about the competence of the next generation - and thus my own safety - if they believe superficial understanding is equivalent to deep mastery.
Edit: interesting side note: I am writing this as a dyed-in-the-wool generalist. Now ain't that something? I don't care if expertise dies off professionally, because I never was an "expert" in something. I always like using whatever works, and all systems more or less feel equal to me, yet I can also tell that this approach is deeply flawed. In many important ways deep mastery really matters, and I was hoping the rest of society would keep that up - and now they are all becoming generalists who don't know shit, and it worries me...
People have 240 Hz monitors these days, so you have a bit over 4 ms to render a frame. If that 1 ms can be eliminated or amortised over a few frames it's still a big deal, and that's assuming 1 ms is the worst-case scenario and not the best.
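Back-of-the-envelope, assuming a 240 Hz target and an illustrative 1 ms pause (numbers picked only to match the comment):

    fn main() {
        let refresh_hz = 240.0_f64;
        let frame_budget_ms = 1000.0 / refresh_hz; // ~4.17 ms per frame
        let pause_ms = 1.0;                        // illustrative pause length
        println!(
            "budget: {frame_budget_ms:.2} ms/frame; a {pause_ms} ms pause eats {:.0}% of it",
            100.0 * pause_ms / frame_budget_ms
        );
    }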
I don't think you need to work in absolutes here. There are plenty of games that do not need to render at 240 Hz and are capable of handling pauses of up to 1 ms. There are tons of games currently written in languages that have larger GC pauses than that.
> That’s like saying the workers have all the leverage.
...they do? You can shuffle money all you want, if nobody can write the fucking code then you don't have software. I imagine it works much the same in any other field.
No, what's toxic is building an environment like 99% of companies where juniors are told that everybody is the same and there's no point doing anything other than copy pasting whatever dogshit the "senior" next to them is typing into VSCode.
You come off as incredibly arrogant too, you just don't realise it because you have the current mainstream opinion and the safety of a crowd.
Do you know how fucking obnoxious it is when 200 people like you come into every thread to tell 10 C or JavaScript developers that they can't be trusted with the languages and environments they've been using for decades? There are MILLIONS of successful projects across those two languages, far more than Rust or TypeScript. Get a fucking grip.
Well, not 100% true... With Rust being the principal language that compiles to WebAssembly, I'm sure there is quite a lot of JS code being rewritten in Rust right now.
But you'll be called things like "catastrophically unprofessional" [1] if you choose JavaScript over TypeScript, or dynamically-typed Python over MyPy.
The comments are absolutely astroturfed to fuck as well, but you're right, there's at least some small signal in there, whereas average Google results have an amount indistinguishable from zero.
This shit is so fucking dumb. Sorry for the unhinged rant, but it's ridiculous how bad every single connector involved with building a PC is in 2025.
I'm just a software guy, so maybe some hardware engineer can chime in (and I'd love to find out exactly what I'm missing and why it might be harder than it seems), but why on earth can everything not just be easily accessible and click nicely into place?
I'm paying hundreds of dollars for most of these parts, and multiple thousands for some now that GPUs just get more and more expensive by the year, and the connector quality just gets worse and worse. How much more per unit can proper connectors possibly cost?
I still have to sit there stressing out because I have no idea if the PSU<->Mobo power connector is seated properly, and no idea if the GPU 12VHPWR cable is seated properly. I'm tearing skin off my fingers trying to get the PSU side of the power cables in because they're all packed so closely together, have a microscopic amount of plastic to grip onto (making it impossible to get any leverage), and need so much force to seat properly - again with no fucking click. I have no idea if any of the front panel pins are seated properly, I can't even reach half of them even in a full ATX case (fuck me if I want anything smaller), and no matter what order you assemble everything in, something is going to block off access to something else.
I'm sure if you work in a PC shop and deal with this 15 times a day you'll have strategies for dealing with it all, but most of us build a PC once every 3 years if that. It feels like as an average user you have zero chance to build any intuition about how any of it works, and it's infuriating that the hardware engineers seem to put in fuck all effort to help their customers assemble their expensive parts without breaking them, or in this case, having them catch fire because something is off by a millimetre.
Connectors are actually extremely difficult to make.
- you have to ensure that the metal connectors take shape and bond to the wire properly. This is done by crimping. Look up how much a good crimping tool costs for a rough approximation of how difficult it can be to get this right.
- one plastic bit has to mate with another plastic bit, mechanically. This needs to be easy enough for 99.99% of users to do easily, yet it needs to be 99.99% reliable, so that the two bits will not become separated, even partially. Even under thermal expansion.
- the electrical contacts inside must be mechanically mated over a large surface area so that current can pass from one connector to another.
- it must be intuitive for people to use. Ideally the user pushes it and it clicks right in. No weird angles either; it could be behind a mechanical component that's tough to reach. Also, the user has to be able to un-mate the connector from the same position. It should be hard for a user to accidentally plug an ill-suited connector into the wrong slot.
- has to cost peanuts. Nobody will pay $3 for a connector. Nobody will even want to pay $1 for a connector. BOM cost is typically 15-20% of finished-goods cost. Will the end user pay $8, $10, $12 for a good connector? No.
- repeatable to manufacture (on the board and on the cable) at high quality. A user might take apart their PC a dozen times over the lifetime of the component, to fix things, clean, etc. So the quality bar is actually very high. Nothing can come loose or break off, not even internal parts.
- physically compact. PCB space is at an extreme premium.
- your connector design has to live across many product cycles, since people are going to be connecting old parts to new boards and they'll be upset if they can't do this. So this increases risk by a lot as redesigning a connector means breaking compatibility for existing users.
Connectors are actually a very very deep and interesting well.
I'm not surprised at all that they are running into issues here, these cards are pulling 500+ watts. That is a LOT of current.
I think next gen we will begin seeing 24V power supplies to deal with this.
They are not transformers, though. The coils/chokes are not galvanically isolated, which makes them (more) efficient. Stepping down from 48V to 0.8V (with massive transient spikes) is generally way harder than doing it from 12V. So they may end up with multi-step converters, but that would mean more PCB area and more passives.
3.3V from 48V is a standard application for PoE. (A 12V intermediate is more common though.) The duty cycle does get a bit extreme. But yes, most step-down controllers can't cover both a 0.8V output voltage and a 48-60V input voltage. (TI Webench gives me one - and only one - suggested circuit, using an LM5185. With an atrocious efficiency estimate.)
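Rough duty-cycle arithmetic for an ideal buck converter (D = Vout/Vin; real converters differ, this is only to show why 48 V to 0.8 V is extreme):

    fn main() {
        let vout = 0.8_f64; // core rail voltage
        for vin in [12.0_f64, 48.0] {
            let duty = vout / vin; // ideal buck: D = Vout / Vin
            println!("{vin} V in -> {vout} V out: duty cycle ~{:.1}%", duty * 100.0);
        }
    }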
You'd probably use an intermediate 12V rail especially since that means you just reuse the existing 0.8V regulator designs.
Aside from the step-down itself, the transients can be quite crazy, which might make power consumption higher (due to load-line calibration). 48V FETs would have much worse RDS(on) compared to lower-voltage-rated ones, so it makes sense that no single smart power stage has such transistors (presently).
There are other issues, too. 48V would fry the GPU for sure; 12V often does not, even with a single power-stage failure.
In the end we are talking about a stupid design (seriously: 6 conductors in parallel, no balancing, no positive preload, lag connectors, no crimping, no solder), and the attempted fix is a much more sophisticated PCB design and passives.
Your SMPS needs sub-2V output, cool. That means it only needs to pass a small portion of the incoming voltage.
But if the incoming is 48V, it needs 48V-tolerant parts: all your caps, the inductor (typically optional), the diodes, the SMPS itself.
Maybe there isn't a size difference between a 50V 0603 capacitor and a 10V 0603 capacitor, but there is a cost difference. And it definitely doesn't get smaller just because.
Your traces at 48V likely need more spacing/separation or routing considerations than they would at 24V, but this should be a quickly resolved problem as your SMPS is likely right next to your connector.
Yes. And it also doesn’t need to handle 40+ AMPs on input, with associated large bus bars, large input wires, etc.
Extra insulation is likely only a mm or two, those other components are big and heavy, and have to be.
It's the same reason inverters have been moving away from 12 V to 48 V. Larger currents require physically larger and heavier parts in a difficult-to-manage way. Larger voltages don't start being problematic until either >48 V or >1000 V (depending on the power band).
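As a rough sketch of the current scaling that drives conductor sizing (the 1200 W load is an illustrative number, not from any specific product):

    fn main() {
        let power_w = 1200.0_f64; // illustrative load
        for bus_v in [12.0_f64, 48.0] {
            let current_a = power_w / bus_v;
            println!("{power_w} W at {bus_v} V -> {current_a:.0} A to carry");
        }
    }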
Voltage regulators. Voltage regulation technology is extremely advanced as even very small efficiency gains can save billions for hyperscalers. Unfortunately, I don't know of any specific products to share as power isn't my domain. I'm only familiar with the space because we sometimes have to pull telemetry directly from the VRs when doing system level RCAs. Some of our BMCs can do this directly via I2C.
There should be plenty. 48-54 VDC is the standard for OCP powershelf designs. Hyperscalers such as Google have been working for nearly two decades now to eliminate voltage conversion steps. When I left, the power plane within the server PCB ran at the busbar voltage, which could float up to 54VDC. Given this, I'd expect them to convert from 48-54VDC down to 3.3 directly or at most something like 5VDC and then use smaller VRs near components such as ram and cpu.
>> Connectors are actually extremely difficult to make.
While your points listed are valid, we have been making connectors that overcome these points for decades, in some cases approaching the century mark.
>> I'm not surprised at all that they are running into issues here, these cards are pulling 500+ watts. That is a LOT of current.
Nonsense. I used to work at an industrial power generation company. 500W is _nothing_. At 12VDC, that is 41.66A of current. A few small, well-made pins and wires can handle that. It should not be a big deal to overcome.

We have overcome it in cars (which undergo _extreme_ temperature and environmental changes, in mere minutes and hours, daily, for years), space stations (geez), appliances, and thousands of other industrial applications that you do not see (robots, cranes, elevators, equipment in fields and farmlands, equipment in mines, equipment misused by people)... and those systems fail less frequently than Nvidia connectors.

But your comment would lead one to think that building a connector with twelve pins on it to handle a whopping (I am joking) 500W is an insurmountable task. It is not much, really: I have had connectors in equipment that needed to handle 1,000,000 watts of power, OUTDOORS, IN THE RAIN, and be taken apart and put back together DAILY.
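Rough per-pin numbers, assuming 6 current-carrying pins on the 12-pin GPU connector and 3 on a classic 150 W 8-pin PCIe plug (both pin counts are my assumption):

    fn main() {
        // (label, total watts, assumed current-carrying pins), all on a 12 V rail
        let cases = [("12-pin @ 500 W", 500.0_f64, 6.0), ("8-pin PCIe @ 150 W", 150.0, 3.0)];
        for (name, watts, pins) in cases {
            let total_a = watts / 12.0;
            println!("{name}: {total_a:.1} A total, {:.1} A per pin", total_a / pins);
        }
    }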
Those GPUs aren’t particularly cheap, even a $100 connector and cable wouldn’t be a huge deal breaker for a $2000-3000 device if it means it’s reliable and won’t start a fire (that’ll cost way more than $3100)
Yes, cheap connectors exist and there is a market for them, like everything "cheap". But to what end does one want to "defend" a trillion-dollar company for skimping - on a product that was never marketed as "cheap" and actually comes with a hefty price tag - on something that is 0.01% of their BoM cost? If you sell at a premium price you'd better make sure your product is premium.
Then what's the point of such an arbitrary comparison? It's normal that plenty of commodities that were expensive when new have been devalued by age and can cost less on the used market than the top of the line BRAND NEW cutting edge GPU today, which itself will be worthless in 10-20 years on the used market and so on.
Presumably, the point is that a working car is more complicated and (in this case) cheaper than the graphics card, while the graphics card maker can't figure out how to make a connector.
I read it as a kind of funny comment making a broader point (and a bit of a jab at nVidia), not a rigorous comparison. I think you might be taking it a bit more seriously than was intended.
An old legacy car is definitely not more complicated than cutting-edge silicon designed and manufactured for high-performance compute.
The price difference is just the free market supply and demand at work.
People and businesses pay more for the latest Nvidia GPUs than for an old car because for their use case it's worth it; they can't get a better GPU from anywhere else, because these chips are complex to design and manufacture en masse, and nobody other than Nvidia + TSMC can do it right now.
People pay less for an old beater car than for Nvidia GPUs because it's not worth it: there are a lot of better options out there in terms of cars, and cars are interchangeable commodities, easy and cheap to design and manufacture at scale at this point, but there is no better option to replace what Nvidia is selling.
Comparing a top GPU with old cars is like comparing apples to monkeys; it makes no sense and doesn't prove any point.
>An old legacy car is definitely not more complicated than cutting-edge silicon designed and manufactured for high-performance compute.
A car is more complicated than a connector, at least.
Anyways, the rest of your comment is again taking a humorous one-liner way too seriously. Thanks for the econ lesson though, I guess. I liked the part where you explained to me the basics of supply and demand like I am in 5th grade.
They could use a common XT90 or something similar. You find high-amperage connectors on all the RC LiPo batteries, and they are cheap enough that you find them on $100 products (batteries).
I regularly work with 100+ amps at 12V. It's obvious the connector Nvidia is using is atrocious, and we all know it.
I know we're just ranting, and there are reasons for the seemingly bad designs. But I have a very recent 1200W Corsair (ATX 3.1/PCIe 5.1) which uses these special "type 5" mini connectors on the PSU side. It's painful to try and get your fingers between them to unclip a cable, and yesterday two of the clips broke off just trying to remove them. I ended up taking the whole PSU out just to make sure I didn't lose plastic clips into the PSU itself. It's fine now, but two of my cables will never latch again. Just, blah.
My first build used a Kingwin PSU from around 2007 which used "aircraft style" round connectors which easily plugged in then screwed down. It even had a ring of blue LEDs around the connectors. It was so cool and felt premium! Having that experience to compare to made the Corsair feel cheap despite being so much more powerful.
I work in power electronics and there are ample connectors that can handle any type of power requirement.
What is happening in the computer space is that everyone is clinging to an old style of doing things, likely because it is free and open, and suggestions to move to new connectors get saddled with proprietary licensing fees.
D-sub power connectors have been around forever (they even look like the '90s still) and would easily be able to power even future monster GPUs. They screw in for a strong connection too, but no reason you couldn't make ones that snap in too.[1]
Man, would I prefer screw-in. I hate snap-in. All of those things on motherboards require serious force, and if you don't know what you're doing it's quite easy to not realize that the reason something isn't going in is a blockage or an issue, rather than not enough force. So the user adds more force and bam, something breaks.
Then of course there's just so much force in general that it's easy for a finger/hand to slip and bump/hurt something else, etc., etc.
I tend not to enjoy PC building because of that. Screws on everything would be so nice imo. Doubly so if we could increase the damn motherboard size to support these insane GPU sizes lol.
Would it be any less safe than a Molex connector? They sometimes still come with brand new PSUs for compatibility. They have 12-volt pins too (the yellow wire, if I remember correctly) that can be very loose. Back when they were more standard, I'd seen sparks go off when they touched a case's chassis, as a cable to the PSU could have multiple unused/unplugged Molex connectors on it just hanging somewhere. The older PSUs I've used never came with full covers for them, so wrapping them in electrical tape was the "fix".
Not a hardware guy, but I wonder if that's a factor in connector choice. Basically, if a significant fraction of PC building is done by teens or young adults building their gaming rig in their living room, with neither formal training nor oversight, do designers have to make sure this is "teenage proof"?
On the contrary, a system like this would most certainly be designed such that the PSU outlet is female, the GPU inlet is male, and you'd use a male to female power cable. This way, a cable plugged only into a device leaves exposed but dead pins on the other end, and a cable plugged only into a PSU leaves non-exposed pins on the other end.
Just like UPSes have C13 outlets, ATX PSUs have C14 inlets, and you plug a desktop PC into a UPS with a C14 to C13 cable.
My favorite is these shitty RGB connectors. They were obviously very recently decided on, yet somehow what we got is something without any positive retention or determined orientation yet still obnoxiously big.
What's wrong with the 4/6/8 pin plugs? I find them perfectly good. And they have a high power variant that would have worked much better here, rated for twice the current per pin.
They're the best of the bunch when it comes to PC parts, but think how far off they are in terms of usability compared to USB, or Ethernet, or HDMI, or Displayport, or those old VGA cables you had to screw in, or literally anything else. They only look good in comparison to the other power connectors.
> They're the best of the bunch when it comes to PC parts
Not really; the PSU side isn't standardized at all, and it's not obvious, because cables from PSU A will happily fit PSU B and fry your entire build.
There's no benefit to not having standards on that side, and the other side is all standard, so they are clearly able to follow standards. "It's just the way it's always been", so they keep doing it.
USB, especially USB-C, is very much designed to carry power. Not quite as much as high-end graphics cards guzzle these days, but it goes up to 240W. Ethernet, HDMI, DP and even VGA (with extensions) are also all used to carry power, even if at much smaller currents.
It's designed for 5 amps. In this context, that's close enough to "not carrying power".
If we're considering the bigger voltages that allow higher power on USB C, then the existing GPU plugs are fine because we can deliver 600W using the same current as a 150W 8 pin plug.
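Quick arithmetic on that, using the nominal 150 W / 12 V budget of an 8-pin PCIe plug:

    fn main() {
        let current_a = 150.0_f64 / 12.0;    // 8-pin PCIe budget at 12 V -> 12.5 A
        let power_at_48v = current_a * 48.0; // same current at 4x the voltage
        println!("{current_a:.1} A at 48 V carries {power_at_48v:.0} W");
    }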
They want to use the Molex connector for some reason. That's what doesn't make sense. They could just, like, give it two ring terminals and let gamers screw them on. Bigger ring terminals take 50A (x 12V = 600W) just fine.
I suspect it's really less than half a dozen people at NVIDIA, like guys in the purchasing division or PCB designers not wanting to make a drastic parts/footprint change. An M8 SMD lug terminal in a gamer accessory sounds crazy, but it's not rocket science.
That's what I wondered; you may not understand all the players. I believe the PCIe standard specifies this Molex connector. Somewhere between what Nvidia ships and the power supply itself, that standard is the only common connection.
No, NVIDIA's use of the connector and the first reports of melting predate the spec. Their hands were never tied to use it.
Gaming GPUs have had sagging problems for years too, and little has been done to solve it. The cards are bending under their own weight. They're not products of proper engineering.
Good connectors are expensive. All-plastic connectors like these are extremely cheap. Here's an example of a connector style as used in internal PC power cabling:
This is $0.20/ea in bulk from a distributor, after import into the US, and after distributor markup. Probably $0.10-0.15 or even less at the scale board manufacturers are working at. You have 4 of these connectors in your system (one on the GPU, one on the power supply, and two on the cable). So still <$1 total in volume.
A quality D-sub power connector that has a metal housing and screws into place is going to be about $10 each. That's $40 just in connector parts, just to power your GPU, not including every other power cable in the unit, and not including all the cat-herding you need to do to get the entire PC industry to shift over to using your new connector.
So, yes, you could do this, but you'd probably double the cost of a PC power supply (if all connectors used were upgraded to the same standard) and increase the cost of every GPU by $100-200, minimum.
People are already complaining that modern GPUs cost too much, so businesses making parts have assessed that it hasn't been worth it to spend this kind of money on connectors. Now, this may change at 600+W... clearly something has to change soon, as we're seeing the limits of what the existing standards can do.
If you increased the cost of the GPU by the upper end of your estimate ($200), that's a 10% increase of the new top end GPU (MSRP $2000 for a RTX 5090). That seems significant... until you realize that that 10% is what would prevent that $2000 GPU from turning into a ruined $0 brick when the connector inevitably melts. All of a sudden, that 10% increase seems like a bargain.
Even a middle school teacher will tell you that putting a large amount of current through a wire is a bad idea. Remember P = I²R? It should be in the first few classes where you learn about electricity.
And Nvidia engineers decided to put current originally carried by 24 wires (or even 32) into a 12-wire connector without changing the connector size. Wow, it's so surprising that it would burn.
I just don't understand how the f*k the whole thing got approved in the first place. It's just insane.
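A sketch of the per-wire heating, assuming the same total current in both cases and an illustrative resistance per conductor (the absolute numbers are made up; only the ratios matter):

    fn main() {
        let total_a = 50.0_f64; // ~600 W at 12 V
        let r_per_wire = 0.01;  // ohms per conductor, illustrative only
        for wires in [24.0_f64, 12.0] {
            let i = total_a / wires;                // current per conductor
            let heat_per_wire = i * i * r_per_wire; // P = I^2 * R
            println!(
                "{wires} wires: {i:.2} A each, {heat_per_wire:.2} W per wire, {:.1} W total",
                heat_per_wire * wires
            );
        }
    }

Halving the conductor count doubles the current per wire, quadruples the I²R heat in each wire, and doubles the total heat dissipated in the connector.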
Power requirements of GPU cards are increasing with each generation, and pushing power to them becomes more difficult. Electricity through a wire causes heat; more power means more heat, and things end up melting. Even the cable would melt (or explode) if a high enough current ran through it. People here are talking about 48 volts instead of 12 volts, which is one solution, but more cabling to distribute the current would be easier.
fucking lmao