That reminds me of a meeting a few days ago where we were discussing whether we should add another server or spend a bit more time developing something. After some time, I asked about the server cost, and it came out to something like 100€ a month more to not have to worry about performance for the next year. In that case the decision was easy enough, but some organisations have fixed server budgets and developers already on the payroll. For those, being able to trade developer time for performance, even if it seems inefficient, may be better than nothing.
Yes, 100% it depends on many factors. 100€ a month = 1200€ a year... Now if a developer can address that in 15 minutes, wonderful; but if it takes her a week, have you really won much? It's not just the time calculation, it's also the other work that same developer could have been doing in that same time.
Moreover, you're right to calculate this per year, because maybe you'll throw the code away entirely in a year, or maybe it will become the key piece of something else and you'll need to optimize it anyway.
Sometimes even having a meeting to discuss the performance is not cost effective. Six employees around a table for an hour, discussing whether or not to chase a saving of 1200€ a year, is not cost effective.
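To make the trade-off concrete, here's a rough back-of-envelope sketch in Python. The 75€/hour loaded developer rate is just an assumption I picked for illustration, not a figure from anyone's budget:

```python
# Back-of-envelope: when does developer time beat a 100€/month server?
# The developer rate below is an illustrative assumption, not a real figure.

SERVER_COST_PER_MONTH = 100   # € per month, from the example above
DEV_RATE_PER_HOUR = 75        # € per hour, assumed fully loaded cost

server_cost_per_year = SERVER_COST_PER_MONTH * 12           # 1200 €
break_even_dev_hours = server_cost_per_year / DEV_RATE_PER_HOUR

print(f"Server: {server_cost_per_year} €/year")
print(f"Break-even developer time: {break_even_dev_hours:.0f} hours"
      f" (~{break_even_dev_hours / 8:.1f} working days)")

# The meeting itself has a cost, too: six people for one hour.
ATTENDEES = 6
MEETING_HOURS = 1
meeting_cost = ATTENDEES * MEETING_HOURS * DEV_RATE_PER_HOUR
print(f"One 6-person, 1-hour meeting: {meeting_cost} €"
      f" ({meeting_cost / server_cost_per_year:.0%} of the yearly saving)")
```

The exact numbers don't matter; the point is that the fix stops paying for itself somewhere between "15 minutes" and "a week", and a single meeting can already burn through a third of the yearly saving.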
But then again, sometimes it really is. In one environment we were asked to use some tooling on top of the OS that reduced performance by 5%. It was some management toy that would have made the Linux boxes do the same thing as the Windows boxes, and that 5% was the price of it...
5% of 1000 machines is 50 machines, or just over one rack of servers, so we could go back to management and say, "The performance penalty of this is over a rack of machines. Are you sure you want this?" and let them decide whether it was worth it.
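Same arithmetic, generalized, in case anyone wants to plug in their own fleet. The 42-servers-per-rack density is just an assumption that happens to match "just over one rack":

```python
# Express a fleet-wide performance penalty in machine-equivalents.
# SERVERS_PER_RACK is an assumed density; adjust for your own racks.

FLEET_SIZE = 1000        # machines, from the anecdote above
PENALTY = 0.05           # 5% performance overhead
SERVERS_PER_RACK = 42    # assumed rack density

machine_equivalents = FLEET_SIZE * PENALTY
racks = machine_equivalents / SERVERS_PER_RACK

print(f"A {PENALTY:.0%} penalty on {FLEET_SIZE} machines "
      f"= {machine_equivalents:.0f} machine-equivalents "
      f"(~{racks:.1f} racks)")
```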
Server upgrade costs can vary wildly depending on what the upgrade entails. The cost of adjusting your EC2 Auto Scaling group's maximum instance count is very different from the cost of contacting the HPE sales department.
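On the cheap end of that spectrum, raising the ceiling on an existing Auto Scaling group is a single API call. A minimal boto3 sketch, where the group name, region, and new maximum are all made-up placeholders:

```python
# Minimal sketch: raise the maximum instance count of an existing
# EC2 Auto Scaling group. Group name, region, and sizes are assumptions.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",   # hypothetical group name
    MaxSize=12,                       # raise the ceiling, e.g. from 8 to 12
)
```

Compare that one call with the lead time of a hardware procurement cycle and the "just add a server" option looks very different depending on where you sit.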
At this point it seems that only mainframes can get more expensive than cloud auto-scaling.
But yeah, some places have a very expensive acquisition procedure, to the point that the labor involved in buying expensive things costs more than the thing itself. But those places "solve" the problem by buying in large batches, and the hardware-versus-developer cost comparison doesn't change much.