Ask HN: Is software becoming less efficient?
25 points by miduil on Dec 23, 2022 | 23 comments
I had a discussion with a friend who argued that inefficient software is bad because it ultimately leads to human labor exploitation in order to meet our demand for ever faster compute power.

I think I agree with the concern, but I also believe a lot of it comes from elsewhere: vendors getting away with only a few years of software support, proprietary OEM drivers that eventually go EOL and give hardware an expiration date - problems that aren't really caused by lower efficiency.

So this kinda brings me to my question: is software actually becoming less efficient?

We have better image/video/audio codecs, better multi-core programming languages, better efficiency in various high-level programming languages, (sometimes) better optimized libraries, better tooling that allows development of more efficient software, and also pre-trained ML technology that uses much less storage/compute than some custom-crafted software would have.




Yes and no. As you said, in some areas we have wonderfully efficient code that can take advantage of hardware for things like codecs. Efficiency through high-level languages - that's debatable. Often they trade the computer's efficiency for developer efficiency - layers of abstraction to hide platform details, runtime layers to defer work to runtime, etc.

I rarely see people using new tools to develop code that is itself more efficient. Based on the ever-growing resource needs of programs that rarely show a corresponding growth in capabilities, I think almost all of the effort from the developer community has been on making developers feel more efficient at the cost of compute resources.

Personally I'd love to see people focus on efficiency even if it takes more time and effort. Unfortunately, the incentive just isn't there - developers focus on what makes it possible for them to grind out new code faster for the largest audience possible. Hence the layers upon layers of runtimes and abstractions that make that possible. If it means burning CPU and memory, they really don't seem to care. Hence the relentless consumption of more and more resources by software.

I'm jaded: performance analysis was a long-term research area for me, so I pay a bit more attention to these kinds of issues than your average JS or Python jockey who thinks the computer is a magical container full of infinite RAM and compute resources that are theirs and theirs alone to consume.


I don't really think the performance hit is that bad, except in odd cases like embedding a browser in an Android app that has to be opened when you load it.

It doesn't matter if you have 200 layers of function calls on a button click handler; it's going to run in 1 ms compared to the 200 ms it takes you to even think about clicking it. Some stuff doesn't matter.
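As a rough illustration of that claim, here's a hypothetical Python sketch (not from any real app) that times a handler buried under 200 layers of trivial wrapper calls - it lands in the microseconds, far below human reaction time:

    import time

    def make_chain(depth):
        """Wrap a trivial handler in `depth` layers of pass-through functions."""
        def handler():
            return 42
        for _ in range(depth):
            inner = handler
            def handler(inner=inner):   # bind the previous layer as a default arg
                return inner()
        return handler

    click = make_chain(200)

    start = time.perf_counter()
    click()
    elapsed_us = (time.perf_counter() - start) * 1e6
    print(f"200-layer click handler: {elapsed_us:.1f} microseconds")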

However, the code to render the page full of buttons and scroll it smoothly could be performance critical.

A one size fits all framework can afford to put tons of effort into optimizing the heavy operations.

Handwritten code might not even use GPU acceleration, because there's no reuse and nobody thought it was worth it to spend months on it.


There are bad performance hits coming from somewhere.

Very noticeable if you are like me and run an older smartphone (Galaxy S5, 2014).


There were definitely bad performance hits on older phones after about a year. Not exactly sure what causes that.

Maybe something in the older Androids, maybe performance of the hardware has just increased so much that it finally caught up with software.

Older phones should be fast, just like old video games, but they haven't optimized that much; they just stopped making stuff slower and shifted to mostly incremental gains and features that require new GPUs.


Thanks for the thoughtful response, I wonder if there is any sort of actual research on that topic (like something that looks at the rate/direction/trajectory of performance) - although I understand that's very hard to reliably measure.


I've read somewhere: when people commute to work, they have a fixed time budget, not a fixed distance budget. If they have access to faster transport, they just take jobs further from home.

Software efficiency seems the same: people have some fixed tolerance of time and bugginess for a task. When hardware gets faster and more stable, they just tolerate more bloat and bugs. Companies only fix this when complaints get bad enough, so the incentive to do better largely disappears.

This sort of fixed ratio while the tech underneath changes is everywhere. My original PC had a 200 MB hard disk, and Win 3.1 took 20 MB, so 10%. My work laptop has a 512 GB SSD, and Win 10 takes about 50 GB, also 10%. Win 3.1 was heavily optimized to fit in RAM and on disk; Win 10 is a bloated slug. It will stay a bloated slug until some shortage or other pressure forces Microsoft to clean up its mess. This already happened, e.g., with the Eee PC that couldn't run Windows Vista, so they kept XP alive and optimized Windows 7 some more.

Enterprise has the same limits. A company had a mainframe with 1 MB of RAM; end-of-year batches ran in about 3 days because they had to. Today, 5 decades later, the COBOL code has more than 10,000 times as much RAM, and end-of-year closing still takes 2 or 3 days. If it were slower, they'd optimize, because otherwise they couldn't fulfill their obligations. But faster makes no sense: they're closed at the end of the year for 3 days anyway. So optimization happens when requirements force it, and bloat happens whenever it gets a chance to grow unchecked.

All of this is also why we should not worry too much about governments forcing companies to become more eco- or repair-friendly. The company will scream that it's impossible, and when forced it will remove some bloat and carry on.


In programming, as in life, we balance requirements, optimize for some things, and compromise on others.

I will assume that when you say "efficiency" you mean "clock cycles and RAM required to perform a process".

So let's start by saying that probably all code could be improved to run faster and/or use less RAM. Given enough time, most things can be "improved".

But there's a price to be paid. Most code starts out "easy to read" but not very performant. Performance is then improved, usually at the cost of readability, until it's "fast enough".

Readability impacts future maintainability: code that's hard to read may contain bugs (especially around edge cases) and may introduce security flaws.

I've worked on libraries, making them highly performant, but the code inevitably becomes more opaque. For a library the trade off is worth it.

For that sales report, that runs once a month, and takes 10 minutes, but could likely easily be optimised to run in 2 minutes, the trade off is less obvious. Keeping the report easy-to-maintain is a valuable long-term benefit.

Computers are not the only clock cycles to measure. Developers also have limited time, and time spent doing one thing is time not spent elsewhere. Sure, I can spend my time making something happen in 1 tenth of a second instead of 2, but if the difference isn't humanly perceptible, what's the point?

Incidentally this question is usually paired with "is software more bloated", and it is, primarily because there are more users who want to do more things. Hard drive space is cheap. Making programs "small" is very expensive.

So yeah, compromises. In time, space and money.


> For that sales report, that runs once a month, and takes 10 minutes, but could likely easily be optimised to run in 2 minutes, the trade off is less obvious. Keeping the report easy-to-maintain is a valuable long-term benefit.

I've actually done this before, and it was worth it for a client wondering why their report took 30 minutes to generate. The particular instance I'm thinking of was a poorly written SQL proc, which I rewrote to bring the report generation time down to <10 seconds; it took a couple of days of work to redo the original developer's work. If I were a better SQL developer it would have taken much less.

Another case of optimization required much less effort: I literally went in, switched the library used to generate a PDF from HTML, and improved performance 100x. I don't remember the original tool, but wkhtmltopdf was probably the better-performing one.
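For what it's worth, a minimal sketch of that kind of swap, assuming wkhtmltopdf is installed and driven from Python via its command line (file names are made up):

    import subprocess

    def html_to_pdf(html_path: str, pdf_path: str) -> None:
        # wkhtmltopdf reads the HTML file and writes the rendered PDF in one call.
        subprocess.run(["wkhtmltopdf", html_path, pdf_path], check=True)

    html_to_pdf("report.html", "report.pdf")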


> software actually becoming less efficient?

I'd argue that software is becoming worse in some ways. It seems the bar for what's acceptable is _very_ low these days, with most people just shrugging issues off.

Personally, I find this talk by Jonathan Blow to be rather inspiring:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

As for software becoming less efficient specifically, I'd say that it is. For example, relying on microservices for everything is probably not a good idea for most companies. Not everyone is AWS, nor do they need to be.


What I’d like to know is how much energy is burnt by corporate mandated software running on workstations.

My work computer regularly spends hours heating my room running god knows what scanning and inventory software when I’m not using it for anything. Unless of course one of the desktop people has managed to deploy coin-mining software across the fleet. At least then someone is getting some benefit beyond room heating.


Gas expands to fill the volume of the container -> Tragedy of the Commons.

Software developers get lazier, stop profiling, and aren't incentivized to produce efficient code... we're usually incentivized to produce code that works and stop at that.


If you're only thinking of performance as "profiling", you're missing the architecture step. Why make something faster if you can skip it altogether?

It's rare for performance issues to happen at the line level, unless you're in CPU-intensive tight loops. I can't remember the last time I had to optimize for that, but maybe I'm just not in the right field.

However, what I constantly have to optimize for is I/O and latency. And for that, what matters is proper architecture.
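A minimal sketch of what I mean, with a hypothetical `db.query` interface: the win comes from changing the shape of the I/O, not from tuning any single line.

    def fetch_users_naive(db, user_ids):
        # N round trips: total latency grows with N even if each query is fast.
        return [db.query("SELECT * FROM users WHERE id = ?", (uid,)) for uid in user_ids]

    def fetch_users_batched(db, user_ids):
        # One round trip: latency stays roughly flat no matter how many ids.
        placeholders = ", ".join("?" for _ in user_ids)
        return db.query(f"SELECT * FROM users WHERE id IN ({placeholders})", tuple(user_ids))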


The tools we have are generally quite efficient. When someone puts thought and care into building software, it can be efficient.

There are two problems: computers today have such a staggering amount of resources that being mindful of performance is no longer baked into the programming mindset. And it simply isn't profitable to spend an extra month taking your program from "usable" to "performant".

More programmers need to try building something on a tiny AVR with 4kbit of RAM. It's more fun than you'd think.


> tiny AVR with 4kbit of RAM. It's more fun than you'd think.

It’s also often more performant than you’d think.

Even tiny microcontrollers can do insane rates of calculation these days. I know these are much larger, but I have trouble utilizing the full power of a Cortex-M0 or M3. Even PIC8/16 parts can be tiny little beasts in the right applications.

Point is, some of these aren’t even slow enough to show you why you’d care about efficient software code.


On AVRs you have to be efficient with both program space and memory, more than with CPU time.


Software often gets used for something besides its intended purpose. Someone writes a simple library to solve a simple problem. Efficiency is not an issue since the problem is simple and the library is rarely used. The code is slow but it works so no one cares.

But because the code actually works, the library gets shared with other programmers. Eventually someone uses it for something that has a lot of data and is run frequently. Now the inefficient code becomes a real problem. Multiply that with a dozen libraries with similar characteristics and you begin to understand the issue.
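A toy sketch of that pattern (hypothetical code, not any particular library):

    def deduplicate(items):
        # Fine for the 20-item lists it was written for; O(n^2) once someone
        # feeds it a million rows.
        result = []
        for item in items:
            if item not in result:      # linear scan per item
                result.append(item)
        return result

    def deduplicate_fast(items):
        # Same behaviour (order preserved), but O(n): the fix once the
        # library is reused at scale.
        seen = set()
        result = []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result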


“Software is getting slower more rapidly than hardware is becoming faster” is Wirth’s Law, coined 25 years ago.

When I started programming, the biggest problem was the 8K of RAM I had available. This year one of my programs crashed after exhausting 500GB.

The programs do more: they crunch more data, paint more pixels, and animate the display in more whimsical ways. But there’s little to force coders to be more efficient, so they spend their time on other things.


> The programs do more: they crunch more data, paint more pixels, and animate the display in more whimsical ways.

More data, I agree. Painting pixels seems to be harder than it was 30 years ago. And animation hasn't evolved in 20 years.

But the same program now needs 20 libraries instead of 10, needs 3 build systems to build, and 10 daemons running before it can even start.


That’s in line with my point: if, say, a program could only use 64 libraries, then people would find ways to remove the ones they don’t need. But there’s no pressure to reduce, so transitive dependencies balloon.


A large amount of modern software is bloated. Most websites are filled with dozens of tracking and marketing (anti-)features, there's DRM in games, ORMs and query languages that require another parsing and resolving layer in backends, not to mention network latency, and so on… Software today does much more than in the past, yet much of what it does is worthless to the user.


Nope, not in any way that is subjectively noticeable. I just started using Google Keep and the snappiness is amazing. Android 12 seems slightly faster than 11.

We are finally optimizing again.

Unfortunately some stuff still lags behind, but it's not like the industry forgot how to write fast software.


Throwing out paradigms because they are complex (but flexible) in favor of paradigms that are simplistic and easier to understand. A good example is Object-Relational Mappers (ORMs).


There are a few things to parse here.

1. In terms of software efficiency, engineers may lament the perceived waste, inefficiency, and imperfection in the produced code, but from a business standpoint it is a rational cost/benefit decision. It is useful to view software through the lens of economics. In economics there is a concept that Labor (e.g. a software engineer) and Capital (e.g. servers, infrastructure) are substitutable. Many sub-optimal programs and systems built with reduced labor cost are perfectly usable by substituting more hardware. Optimization only makes sense where there is a clear benefit that exceeds the cost.

Thus, as a contrived or extreme example, would a manager spend $200k in labor to produce a highly optimized program, hand-crafted in assembly, or spend $500 in labor to produce a program in a higher level language such as Java, that does the same thing but uses more compute resources? The spread in cost between those two choices allows one to throw a lot of hardware at the sub-optimal program. Thus it is frequently a better business decision to produce inefficient software and throw more hardware at it. It may make the engineer feel bad, but what they wish to optimize is not aligned with what the business wishes or needs to optimize.

2. In terms of the short 'shelf-life' of software, the same problem infects hardware, consumer electronics, and other products. I've purchased a number of iPads for my family over the years. After a few years and iOS versions, more and more apps stop being compatible, until the device becomes effectively useless even though the hardware is the same as when I bought it.

Again, let's view this through the lens of economics. A cynic will look at the iPad situation and think 'What better way to separate me from my money than to force me to buy a new product every few years, solely by software shenanigans?' Of course businesses enjoy selling more product, but they also have cost constraints in order to be viable (ignoring those who are perhaps making 'obscene profits' before competitors take notice, as my econ professor used to shout so passionately).

We might consider as an alternative that it is simply too expensive to maintain many versions of an app, on multiple platforms, with backwards compatibility and security concerns. The business instead is making a rational decision to only support their application on the OS versions and platforms that the majority of their customers are running at any point in time, similar to how a web developer at some point has to stop bothering to ensure their site works in IE 5.0.

None of that reduces my frustration at planned obsolescence but maybe this is just the reality of things.

3. I'm on the fence about the labor-exploitation part: this seems like a different and very complex issue. Some may argue that more hardware manufacturing provides good jobs without extended education or training requirements, while others may argue that those jobs are exploitative because the working conditions are poorly regulated or the position does not pay enough by their standard.

At a macro level, global poverty levels have significantly decreased over the past 30 years [1], so humanity seems to be doing something right. An optimist may say that as regions of the world move out of poverty, the regulatory environment will inevitably follow to reduce abuse, pollution, and safety risks. Time will tell but it requires patience - human systems are slow to change, in contrast to software and hardware.

[1] https://blogs.worldbank.org/opendata/april-2022-global-pover...



