The value n often stands for the input size measured in bits, but not always. As long as you make it clear what n means, there is nothing wrong with using O-notation for that n. O-notation is just a way to denote a set of functions. This notation is also commonly used in mathematics and physics, where it doesn't even have anything to do with algorithms.
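For a concrete illustration of why the choice of n matters (a rough Python sketch, my own example): trial-division primality testing takes O(sqrt(n)) steps when n is the numeric value of the input, yet that same count is exponential in the input size measured in bits, b = n.bit_length().

    def is_prime(n: int) -> bool:
        # About sqrt(n) iterations in the worst case: polynomial in the
        # value n, but exponential in the bit length b = n.bit_length().
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

Both complexity statements are correct; they just measure the input differently.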

Of course, but the OP is not asking about big-O notation in mathematics in general. He is specifically asking what it means when applied to the time complexity of algorithms.
