The function runs in linear time relative to the value of x. However, time complexity refers to the size of the input, not the value. For example, binary search's O(log n) time complexity refers to the number of elements n in a list (n varies with the list's size). x + 1 will always increase the value of x, but not necessarily the size. The size of x is the number of digits in x. f(x) runs once if x=1, ten times if x=10, one hundred times if x=100, etc. So each time x's size increases by 1, f's runtime increases by a factor of 10. It's a very literal definition of time complexity and it's counterintuitive, but ultimately it does make sense.
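To make the distinction concrete, here's a small Python sketch (my own illustration, not from the thread) comparing the work f does to the number of digits in x:

```python
def f(x):
    """Runs a constant-time operation x times: O(x) in the *value* of x."""
    count = 0
    for _ in range(x):
        count += 1  # stand-in for any O(1) piece of work
    return count

# Each extra decimal digit in x multiplies the iteration count by 10:
for x in (1, 10, 100, 1000):
    print(f"x={x} size={len(str(x))} digits, iterations={f(x)}")
```

So the runtime is linear in x's value but exponential in x's size, which is the whole disagreement in this thread.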

 [deleted]
 It's the difference between a function taking a list and a function taking a number.

A list's size depends on the number of elements in the list. Adding an element to a list increases the size of the list. A number's size depends on the number of digits required to express the number. `number++` will always increase the number's value, but not necessarily the number's size.

anonymouz's point is that we should describe f(x)'s runtime in terms of the size of x, not the value of x. While I agree that the natural interpretation is to use x's value when x is a number (and x's size is going to be fixed anyway), f(x) is exponential in relation to x's size and linear in relation to x's value.

Put another way: 4 and 5 have different values but the same size: we're expressing both with 1 digit. 4 and 10 have different values and different sizes, as we require 2 digits to express 10 and only 1 to express 4. Of course we'd probably be dealing with binary, but it's the same idea.

It's really a question of semantics/how you interpret OP's example, and the natural interpretation leads to the function being O(n). However, anonymouz makes an interesting point that I hadn't considered before.
 But the size of the number is such a programming-centered view of things. In my opinion it doesn't at all reflect the mathematics behind it. It's completely dependent on the representation of the number, which could really result in anything. The only thing that really matters is the size of the set of numbers that you do your computation on.

What's really going on here (not in code but conceptually) is that you have two steps. First you take a number `n` and basically convert it into a generator for the set of numbers `{0, ..., n-1}`. We can assume that to be an O(1) step. Then in the second step you apply an O(1) function to every number in the set, which is clearly O(n). So you end up with O(1) + O(n), which boils down to O(n).

So we have functions

`g(n) -> generator for {0, ..., n-1}`

`f(x, funct) -> funct(n) for n in x`

and we combine them into

`p(n) -> f(g(n), print)`

Clearly, the input size of the arguments to `p` is no longer meaningful for the total runtime. You have to assume it is a set of numbers to make any sense.

So in my opinion, while anonymouz is technically correct, it does not make any sense to see it that way. Big O notation/analysis is there to gain some understanding of your algorithm's complexity, not to link the number of digits in a number to the complexity of your algorithm.
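The two-step decomposition above translates almost directly into Python (a sketch of the commenter's `g`/`f`/`p`, using `range` as the lazy generator):

```python
def g(n):
    # Step 1, O(1): produce a lazy generator for the set {0, ..., n-1}.
    # range() doesn't materialize the set; it yields values on demand.
    return range(n)

def f(xs, funct):
    # Step 2: apply an O(1) function to every element -- O(len(xs)).
    for n in xs:
        funct(n)

def p(n):
    # Combined: O(1) + O(n) = O(n) in the *value* of n,
    # once you treat the input as a set of n numbers.
    f(g(n), print)
```

Viewed this way, `p`'s cost is naturally measured by the cardinality of the set `g(n)` produces, not by the digit count of the integer `n` passed in.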
 > So in my opinion while anonymouz is technically correct, it does not make any sense to see it that way.

I absolutely agree, and anonymouz's approach is not how I would approach time complexity. I thought it was an interesting point, so I decided to explain where he was coming from, but practically I don't think it makes any sense.
 Consider a function that prints a list of the numbers 1 through N. This function is linear in the size of its input, because we're talking about the size of a list.

But now, make a function that prints the first N numbers. This function is exponential in the size of its input, because we're talking about the size of a number.

We can optimize this second function by replacing our binary representation of a number with a unary ("tallying") representation. Now the size of a number grows linearly with its value, and so our function is linear instead of exponential. Nice win!
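The "optimization" is tongue-in-cheek, but it's easy to demonstrate (my own sketch): in unary, a number's representation size equals its value, so the same loop becomes linear in the input *size* without doing any less work.

```python
def to_unary(n):
    # Unary ("tally") encoding: the string's length *is* the value.
    return "1" * n

def print_first(unary_n):
    # One iteration per character: linear in len(unary_n),
    # i.e. linear in the size of the (unary) input.
    for i in range(1, len(unary_n) + 1):
        print(i)

# to_unary(1000) is 1000 characters long, versus 4 decimal digits
# (or 10 bits) -- the runtime didn't change, only the yardstick did.
```

Nothing got faster; we just inflated the input until "size" and "value" coincide, which is exactly why the size-based definition can feel like a technicality here.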
 anonymouz's interpretation is bizarre and definitely not how I'd think about the function; I'm just trying to explain where he's coming from. I don't plan on using the interpretation in the future, but it was confusing to me too and I thought it was an interesting thought, so I figured I'd attempt to explain.
 > anonymouz's interpretation is bizarre and definitely not how I'd think about the function.

It is usually understood that one measures the time complexity of an algorithm in the size of the input (only if the input is an integer would it even make sense to measure it in the value of the input). So you may of course use whatever definition you like, but be careful: statements about the time complexity of certain things are often made without referring to the exact definitions used, and you may then misunderstand them.

For example, when people say "We don't know a polynomial-time algorithm to factor integers," they mean polynomial time in the size of the input (i.e., the number of digits/bits of the integer). The problem would trivially be polynomial time if you allowed it to be polynomial in the integer itself.
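The factoring example is the standard one. A quick illustration (mine, not the commenter's): trial division factors n in about sqrt(n) divisions, which is polynomial in the *value* of n but exponential in its *size*, the ~log2(n) bits needed to write it down.

```python
def trial_division(n):
    """Factor n by trial division: O(sqrt(n)) divisions.

    Polynomial in the value n, but exponential in the input size:
    an extra bit roughly doubles n, and so multiplies sqrt(n) by ~1.41.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(trial_division(360))  # [2, 2, 2, 3, 3, 5]
```

This is why trial division is "fast" in the value-based reading yet hopeless on a 2048-bit RSA modulus, and why the polynomial-time claim is always meant in the size of the input.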
 Sorry, I deleted my post before I saw your response.
