> Thinking about Big O is part of being aware of what your code actually does.

I don't get this at all. Surely Big-O is an abstract way of describing a property of your code to somebody who doesn't want to read it. If Big-O gives you a better understanding of what your code does than writing it did, that suggests you wrote it by applying some kind of genetic algorithm and you should probably go back and look at it again until you understand it.




Maybe I didn't express this very clearly. I argue that it's paramount to understand what your code actually does. As simple as this sounds, in my experience a lot of developers just don't care. I'm not talking about applying abstract CS concepts; I'm talking about getting a feel for the amount of computational effort and memory consumption your code causes.

Once you understand your code's behavior, you understand its Big O. It's not necessary to be explicit about it, but this is one of the rare cases where theoretical notation and engineering intuition map onto each other very nicely. If you can say things like "my code's CPU cycles grow linearly with the variable n", that's the equivalent of understanding Big O.
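
To make that concrete, here's a toy Python sketch (made-up functions, purely to illustrate the point): the first one's work grows linearly with the input size, the second one's grows with the square of it.

    # Linear: the loop body runs at most len(items) times -- O(n)
    def contains(items, target):
        for item in items:
            if item == target:
                return True
        return False

    # Quadratic: the inner scan repeats for every outer element -- O(n^2)
    def has_duplicates(items):
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                if a == b:
                    return True
        return False

If you can look at the second function and feel that doubling the input roughly quadruples the work, you already "know" its Big O, whether or not you ever write O(n^2).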

So I'm not advocating blindly "learning" Big O; I'm trying to say that once you get a good sense of what your code is doing, you get an understanding of Big O automatically.

> Surely Big-O is an abstract way of describing a property of your code to somebody who doesn't want to read it.

It can be, but that's not what makes it useful. I think of it as a micro-vocabulary that comes naturally from understanding code. It's really not an abstract concept; it's a very concrete attribute that a section of code has.
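
For instance (another made-up Python sketch, names are hypothetical): the same membership test costs a very different amount depending on the data structure behind it, and that cost is a property of this concrete piece of code, not of some abstraction.

    queries = ["word1", "nosuchword"] * 5_000
    haystack = [f"word{i}" for i in range(100_000)]

    # Each "in" scans the whole list: roughly len(queries) * len(haystack) comparisons
    hits_slow = sum(1 for q in queries if q in haystack)

    # Each "in" is a hash lookup: roughly len(queries) operations on average
    haystack_set = set(haystack)
    hits_fast = sum(1 for q in queries if q in haystack_set)

Same one-liner, same result, but a wildly different cost curve as the data grows; that's the concrete attribute I mean.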



