Eh, you're technically correct, but when it's used as a point of relative comparison to an O(n²) algorithm I think there's some merit to it, because it actually gives some sense of the constant overhead within that context.
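For instance, turning a nested O(n²) scan into two sequential passes (a minimal sketch; the function names and the duplicate-finding task are just made up for illustration):

    # O(n^2): nested loops, every pair of elements is compared
    def has_duplicate_quadratic(items):
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    # "O(2n)": two sequential passes, each linear in n
    def has_duplicate_linear(items):
        counts = {}
        for x in items:                       # first pass: count occurrences
            counts[x] = counts.get(x, 0) + 1
        for x in items:                       # second pass: check for repeats
            if counts[x] > 1:
                return True
        return False

Both loops in the second version together do ~2n iterations instead of ~n²/2, which is the kind of "constant overhead" the 2 is gesturing at.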
The best kind of correct. Calling it O(n), then, is the best way to express the performance improvement - I think that's the whole point of Big-O notation.
However, I agree that for smaller n one should not underestimate the impact of the constant factors.
Actually, O(2n) did not convey this to me at all; all it did was cause confusion. If this ("turn it into two non-nested loops") was indeed the intent, I'd suggest spelling it out, not abusing notation.
> If this ("turn it into two non-nested loops") was indeed the intent, I'd suggest spelling it out, not abusing notation
O(2n) is not an abuse of notation. It's just as well defined as O(n). The fact that O(2n) is a subset of O(n) is a theorem, not part of the definition of the notation.
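For reference, here's the usual definition and the theorem, sketched in LaTeX:

    % Definition: f is in O(g) iff there exist C > 0 and n_0 such that
    % |f(n)| <= C|g(n)| for all n >= n_0.
    f \in O(g) \iff \exists C > 0,\ \exists n_0,\ \forall n \ge n_0 :\ |f(n)| \le C\,|g(n)|

    % Theorem: O(2n) = O(n).
    % If |f(n)| <= C * 2n, take C' = 2C to get |f(n)| <= C'n, so f is in O(n);
    % conversely, |f(n)| <= Cn <= C * 2n, so f is in O(2n).

Nothing in the definition forbids writing 2n as the comparison function; it just happens to name the same set as n.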
Interesting, I hear people spell out constants all the time when using O notation to put emphasis where they want it. I guess that's not really as universal as I thought, but I still think it's a good way to express the idea.
It can cause more confusion, though. If, say, you write O(kn) but the oh-notation hides a factor of k^3, then it would be better to just write O(n). Or write O_k(n) to signal a hidden dependency on k. Or, best of all, just write out k^4 n without any ohs at all.
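To make the trap concrete (a made-up cost function, just for illustration):

    % Suppose the true cost is f(n,k) = k^4 n. One can truthfully write
    %   f(n,k) \in O(kn),
    % but the hidden "constant" is k^3, which is not constant in k:
    f(n,k) = k^4 n = \underbrace{k^3}_{\text{hidden in the } O} \cdot kn
    % O_k(n) at least flags the hidden k-dependency;
    % writing k^4 n outright hides nothing.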
What if the constant were massive? Then the coefficient might not be significant in practice. 2n might be worse than n + 99999999, though they both reduce to O(n). It is a classification.
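To see where those two example functions actually cross:

    % Where does 2n overtake n + 99999999?
    2n > n + 99999999 \iff n > 99999999
    % Below n ~ 10^8 the huge additive constant dominates and 2n is the
    % smaller function; above it, the coefficient 2 dominates.
    % Either way, both functions lie in O(n).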
Personally, I would prefer to use different semantics to express that. It seems big O notation often carries a separate meaning in common parlance.
I thought big O notation was about classifying function growth irrespective of the coefficient/constant. Does it not grow at all with respect to n? Great, you're O(1). Does it grow linearly? Cool, you're O(n). Logarithmically? Quadratically? In my mind, this kind of analysis aligns with big O notation.
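A rough way to see that classification empirically (an illustrative Python sketch, not a rigorous benchmark):

    import time

    def constant(n):            # O(1): no growth with n
        return 42

    def linear(n):              # O(n): doubling n roughly doubles the time
        return sum(range(n))

    def quadratic(n):           # O(n^2): doubling n roughly quadruples the time
        total = 0
        for i in range(n):
            for j in range(n):
                total += 1
        return total

    for f, n in [(constant, 10**6), (linear, 10**6), (quadratic, 10**3)]:
        for size in (n, 2 * n):
            start = time.perf_counter()
            f(size)
            print(f.__name__, size, time.perf_counter() - start)

The doubling pattern in the timings is exactly what the growth classes predict, regardless of the constant factors on any particular machine.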
If constant factors are significant in practice and the size of the input is not, then big O notation is not the right tool for the analysis, and that's perfectly fine. For instance, we generally don't talk about big O notation when comparing the performance of SSDs and spinning HDDs.
Big O notation is for describing how the performance of an algorithm changes as the size of its input changes. If the size of the input is not a significant concern, then it's totally fine to not use big O notation for analyzing the problem.