
But the size of the number's representation is such a programming-centered view of things. In my opinion it doesn't reflect the mathematics behind it at all: it's completely dependent on how the number happens to be represented, and that could be anything.

The only thing that really matters is the size of the set of numbers that you do your computation on.
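
(To make the representation point concrete, here is a small illustrative snippet of my own: the very same loop is linear if you measure against the value n, i.e. the size of the set, but exponential if you measure against the number of decimal digits d of n, since n can be as large as 10^d - 1.)

    # Illustrative only: relate the digit count of n to the iteration count
    # of an O(n) loop. The particular values of n are just examples.
    for n in (9, 99, 999, 9999):
        d = len(str(n))  # size of the decimal representation
        print(f"{d} digits -> {n} iterations (10**{d} - 1 = {10**d - 1})")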

What's really going on here (not in code but conceptually) is that you have two steps. First you take a number `n` and convert it into a generator for the set of numbers `{0, ..., n-1}`; we can assume that to be an O(1) step. Then, in the second step, you apply an O(1) function to every number in the set, which is clearly O(n). So you end up with O(1) + O(n), which boils down to O(n).

So we have functions

`g(n) -> generator for {0, ..., n-1}`

`f(x, funct) -> funct(k) for k in x`

and we combine them into

`p(n) -> f(g(n), print)`
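
For concreteness, a minimal runnable sketch of that decomposition in Python (my own rendering; `range` stands in for the generator, and the names just mirror the pseudocode above):

    def g(n):
        # O(1): constructing the range object does not iterate anything yet
        return range(n)

    def f(xs, funct):
        # O(|xs|) applications of an O(1) function: the dominant term
        for k in xs:
            funct(k)

    def p(n):
        # O(1) + O(n), which boils down to O(n) with n read as the set size
        f(g(n), print)

    p(5)  # prints 0 through 4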

Clearly, the encoded size of the argument to `p` is no longer what determines the total runtime; you have to treat the input as a set of numbers for the analysis to make sense.

So in my opinion, while anonymouz is technically correct, it does not make sense to see it that way. Big-O notation/analysis is there to gain some understanding of your algorithm's complexity, not to link the number of digits in a number to the complexity of your algorithm.




>So in my opinion while anonymouz is technically correct, it does not make any sense to see it that way.

I absolutely agree, and anonymouz's framing is not how I would approach time complexity. I thought it was an interesting point, so I decided to explain where he was coming from, but in practice I don't think it makes much sense.

-----



