They're not just annoying, they possibly don't get the point of big O. The n in O(n) means that at the time of algorithm design the size of the data is unknown and considered unbounded. If you already know its upper limit, then it becomes O(c) (where c is a known constant), and if c is low enough (a discretionary decision) it can arguably be treated as O(1). Think of how a theoretical hash map traverses one of its buckets in linear time to find a key's value (if the bucket is implemented as a linked list): we still consider the overall lookup to be O(1) simply because the upper bound on the bucket's length is known.
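
To make that concrete, here's a minimal toy sketch in Python (the names `TinyHashMap` and `NUM_BUCKETS` are made up for illustration, not any real library). The `get` walks one bucket linearly, but as long as chain lengths stay bounded by some small constant c (which real implementations ensure with good hashing and resizing, omitted here), that scan is O(c) and the lookup is conventionally called O(1):

```python
NUM_BUCKETS = 16  # hypothetical fixed bucket count, just for the sketch

class TinyHashMap:
    def __init__(self):
        # Each bucket is a plain list standing in for a linked list.
        self.buckets = [[] for _ in range(NUM_BUCKETS)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % NUM_BUCKETS]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key):
        # Linear scan of one bucket: O(length of bucket). If that length
        # is bounded by a constant c, the scan is O(c), which we treat
        # as O(1) for the overall lookup.
        bucket = self.buckets[hash(key) % NUM_BUCKETS]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)
```

The moment you drop the assumption that chains stay short (say, a bad hash function dumps everything into one bucket), the bound disappears and the same loop is back to O(n).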