We’ve been advocating strongly for using performance budgets as a means of protecting hard-earned performance improvements. There’s a depressing stat about performance regressions: something like 25-50% of big sites regress in performance within six months of a big push to optimize. Don’t quote me on the specific number; I believe it’s mentioned in a Google Web DevRel piece.
Every feature request adds more logic to the code, and it's not uncommon for the business to ask for code that affects every single path.
I've noticed this before, but it's very true on this project: management is very hesitant to create new 'areas' of the application, and so more and more functionality shows up on the common paths through the code, with all of the attendant uptick in computational complexity that goes with it.
Long ago I had a job where they couldn't understand why the whole site had gotten slower. Well, apparently when we told them that putting an expensive call in the header of every view would slow everything down, they didn't believe us. Nobody needs that kind of data; the ones who do don't need it accurate to the millisecond; and every penny you spend on information the average user never notices is wasted money.
There is no big picture. There is only hill climbing and getting stuck on local maxima constantly. And no I'm not bitter, why do you ask?
This really nicely describes why I think so much software becomes bloated. You see this with startups all the time: at first the product is hacky/clunky/lacking features; then, after some time and love, it does a small set of things really, really well and is truly a joy to use; and later on it becomes slow, bloated, and trying to do far too much, ending up doing many things unremarkably rather than a few things well (the opposite of "half, not half-assed").
I think that last stage happens when you have individual teams A/B testing new changes independently and hacking on each new piece without much thought to the overarching cohesiveness of the product. All the incentives naturally align toward this too, so it takes some really strong leadership to prevent it and have a broader product vision that supersedes all the local optima of each individual team's features.
(As a side note, this also often happens when the software/startup needs to be monetized)
If you're constantly ensuring that performance doesn't regress, certain features become a lot harder to implement. In some cases, the better solution is not to "make this feature take a lot less of the budget" but to ask, "are there other parts of the budget that are easier to cut than the current feature?"
Having regular performance optimization sprints allows us to "cut where it's easiest" rather than forcing us to restrict non-performant features that might be very useful for the business.
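Enforcing a budget mechanically is what keeps it from eroding one feature at a time. A minimal sketch of a CI-style budget check in Python, assuming a build step whose asset sizes you can measure; the asset names and limits here are hypothetical, not from any real project:

```python
# Hypothetical per-asset size budgets, in bytes.
BUDGETS = {
    "app.js": 170 * 1024,   # 170 KiB of JavaScript
    "app.css": 50 * 1024,   # 50 KiB of CSS
}

def check_budgets(sizes, budgets):
    """Return (asset, size, budget) for every asset that exceeds its budget."""
    return [
        (asset, sizes.get(asset, 0), budget)
        for asset, budget in budgets.items()
        if sizes.get(asset, 0) > budget
    ]

# In a real pipeline you'd measure actual build output,
# e.g. os.path.getsize(path) over the dist/ directory.
measured = {"app.js": 200 * 1024, "app.css": 40 * 1024}
for asset, size, budget in check_budgets(measured, BUDGETS):
    print(f"OVER BUDGET: {asset} is {size} bytes, budget is {budget}")
```

A CI job would fail whenever the returned list is non-empty, which forces exactly the conversation above: shrink the new feature, or cut something else out of the budget to make room for it.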
Performance is improved by removing a layer of abstraction.
Lather, rinse, repeat.