Part of the problem is that no user ever looks at the codebase. Unlike when you design a PCB, build an electrical machine, or build a bridge, where people see the result, in software people only see the UI. And because copying is free, the economics differ too: on a PCB, for example, every jumper wire costs you something, while in software copies cost nothing. So why bother truly fixing things?
Copying isn't free though. Every front-end application built on Electron, or running in a container that duplicates half the OS stack, burns more memory on my computer, forcing me to upgrade the RAM or the whole machine earlier with each passing year.
Copies are free to produce, but the cost of running ever more copied software keeps going up for the people running it.
I think software does have a number of per-unit costs: every additional user exercises your edge cases more often, raises the odds of someone hitting a bug or an incompatible environment, and brings demands for more unusual features. And if you're successful, your growing user base attracts the attention of competitors.
If the software is coupled to hardware, its rising complexity makes the hardware more costly to develop and maintain, and it may delay the introduction of newer or more valuable hardware features and products.
When consumer software still came on physical media, nearly every box advertised the RAM, CPU, and minimum system requirements needed to run it.
That just isn't the case nowadays, so you don't know there will be trouble until it shows up at runtime.
I can limit the memory available to ALL the tools I use every day (I only do it for Docker and JVM stuff), set priority levels, and so on, but it feels very weird for a brand-new laptop to be hitting swap after a few days of uptime.
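For what it's worth, this is roughly the kind of capping I mean. It's only a sketch: the limit values, image name, jar, and app names below are placeholders, not a recommendation.

    # Docker: hard memory cap for a container (512m is an example value)
    docker run --memory=512m --memory-swap=512m my-image

    # JVM: cap the heap; real footprint is still a bit above -Xmx
    java -Xmx512m -jar app.jar

    # systemd (Linux): run any command in a scope with a memory ceiling
    systemd-run --user --scope -p MemoryMax=1G some-electron-app

    # Lower the CPU priority of a background task
    nice -n 10 some-build-task

It works, but having to babysit per-process limits on a new machine is exactly the weird part.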