"Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified."
And immediately above the famous quote, Knuth writes:
"The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies."
All this from "Structured Programming with Goto Statements", which is an advocacy piece for optimization. And as I've written before, we as an industry typically squander many orders of magnitude of performance. An iPhone has significantly more CPU horsepower than a Cray-1 supercomputer, yet we somehow consider it acceptable for programs to struggle once their data sets grow past a hundred small entries (notes/tasks/etc.).
Anyway, I write a lot more about this in my upcoming book, "iOS and macOS Performance Tuning: Cocoa, Cocoa Touch, Objective-C, and Swift".
Admittedly, this was probably related to misconfiguration and database issues (i.e. having zero oversight or administrative maintenance of the underlying MS SQL Server). That specific failure mode might not apply to the context of the article (optimization in code and systems design).
For instance, I find test cases that spin up an in-memory, in-process Spark instance extremely slow; yet that same spin-up cost is negligible in the context of a real job that processes gigabytes of data per task.