The point there is that Big-O notation also implies that the growth takes a given shape above some large-ish n. And metrics like "number of operators" or "number of business requirements" are rarely large enough for that to make sense.
And in fact, approaches to system design that try to make the code size independent of the number of business requirements and their possible future changes lead to exactly the kind of "complexity worship" discussed in TFA (various ad-hoc Turing-complete VMs that interpret code represented as rows in a bunch of relational tables, and whatnot).
People who write "x is O(n)" usually actually mean "x ∝ n", but 1. ∝ is hard to type on a keyboard, and 2. even fewer people (including CS majors!) know what ∝ means than know what O(n) means well enough to figure out its non-jargon usage from context.
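(To spell out the difference, since the two notations really do say different things; roughly, the textbook definitions are:

    x ∝ n      means   x = c·n for some fixed c > 0, for every n
    x = O(n)   means   x ≤ c·n for some c > 0 and for all n ≥ n₀

i.e. ∝ claims exact proportionality everywhere, while O(n) only gives an upper bound, and only above some large-ish n₀, which is the point upthread about the shape only holding for large-ish n.)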
So you think that the imprecision of saying "O(n)" when the "large-ish" behavior is not relevant is worse than the verbosity of saying "scales directly/with the square of/not-at-all with [relevant constraint]"?
FWIW, Big-O itself, even in technical contexts, gets used imprecisely, e.g. calling hashtable lookups O(1), which is not possible even under an idealized computer model (instant memory access, etc.).
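(The standard argument for that, in case the claim sounds odd: to keep n keys distinct, a key must have at least log₂(n) bits, so even reading or hashing a single key costs

    time per lookup ≥ c · log₂(n)   for some c > 0

no matter how fast memory is. And on top of that, the familiar O(1) bound is only an expected one; with collisions the worst case degrades to O(n).)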
Is there a shorter way of saying "scales proportionally with n" that you would suggest the tech community prefer because of its greater precision?
That's like saying there's no point in using pronouns, since you have to say the antecedent anyway.
It would be wrong in both cases, because context can make clear what a variable or a pronoun refers to. If the problem context makes clear what the binding constraint is and you just need to talk about scaling behavior, then "O(1) rather than O(n)" really is shorter than "doesn't depend on the number of operations rather than being directly proportional to it".
>>what we ended up with was several classes for each operation.
>The second is shorter, and says no more than what is relevant.
It says less: the O notation is used to indicate that as you add more operations, you will need to add more classes, rather than only needing to add classes when there is logic the operations don't yet implement.
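To make the shape concrete, here is a minimal TypeScript-ish sketch of the two patterns being contrasted (hypothetical names, not the actual codebase from the quote):

    // Code that is O(n) in the number of operations:
    // every new business operation brings its own class.
    interface Operation {
      execute(input: string): string;
    }

    class CreateOrder implements Operation {
      execute(input: string): string { return `created ${input}`; }
    }

    class CancelOrder implements Operation {
      execute(input: string): string { return `cancelled ${input}`; }
    }
    // ...and another class for every further operation.

    // Code that grows only with the amount of distinct logic:
    // operations that share behaviour share a class, and a new
    // class appears only when some operation needs logic that
    // nothing existing implements.
    class SimpleStatusChange implements Operation {
      constructor(private verb: string) {}
      execute(input: string): string { return `${this.verb} ${input}`; }
    }

    const cancelOrder = new SimpleStatusChange("cancelled");
    const holdOrder = new SimpleStatusChange("held");

The O(n) in the comment above was flagging the first shape, not the raw class count.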