I'm also mainly a web developer. In my head I'm thinking about Big-O a lot - not that I'd go and formally analyze every bit of code, but keeping Big-O behavior in mind leads to certain almost subconscious decisions that end up being good for your software.
Being Big-O-aware makes sense especially in web development, where minimizing latency and scaling predictably are important. Keeping Big O in mind influences a lot of choices, from which language constructs you reach for to how your database tables are modeled.
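For example - and this is just a throwaway TypeScript sketch with made-up names, not anyone's actual code - preferring a Set over repeated Array.includes calls is exactly the kind of near-automatic choice I mean:

    // Filtering users against a list of banned IDs (hypothetical example).
    // Calling Array.includes inside the filter scans the banned list for
    // every user, so the whole pass is O(n * m).
    function activeUsersNaive(users: string[], banned: string[]): string[] {
      return users.filter(u => !banned.includes(u));
    }

    // Building a Set once makes each membership check O(1) on average,
    // so the same pass becomes O(n + m). Same result, different scaling.
    function activeUsers(users: string[], banned: string[]): string[] {
      const bannedSet = new Set(banned);
      return users.filter(u => !bannedSet.has(u));
    }

Nothing about that requires formal analysis; it's just the reflex that falls out of knowing how the data structures behave.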
Thinking about Big O is part of being aware of what your code actually does.
I don't get this at all. Surely Big-O is an abstract way of describing a property of your code to somebody who doesn't want to read it. If Big-O gives you a better understanding of what your code does than writing it did, that suggests you wrote it by applying some kind of genetic algorithm and you should probably go back and look at it again until you understand it.
Once you understand your code's behavior, you understand its Big O. It's not necessary to be explicit about it, but this is one of the rare cases where theoretical notation and engineering intuition map onto each other very nicely. If you can say things like "my code's CPU cycles go up linearly with variable n", that's equivalent to understanding Big O.
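To make that mapping concrete, here's a hypothetical TypeScript sketch (the functions are invented for illustration, not taken from anything above):

    // "the work grows linearly with the input" -> O(n)
    function sum(xs: number[]): number {
      let total = 0;
      for (const x of xs) total += x; // one pass over the input
      return total;
    }

    // "the work grows with every pair of items" -> O(n^2)
    function hasDuplicate(xs: number[]): boolean {
      for (let i = 0; i < xs.length; i++) {
        for (let j = i + 1; j < xs.length; j++) {
          if (xs[i] === xs[j]) return true; // nested passes over the input
        }
      }
      return false;
    }

The plain-English description and the notation are saying the same thing.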
So I'm not advocating blindly "learning" Big O; I'm saying that once you get a good sense of what your code is doing, a Big O understanding comes automatically.
> Surely Big-O is an abstract way of describing a property of your code to somebody who doesn't want to read it.
It can be, but that's not where its value lies. I think of it as a micro-vocabulary that comes naturally from understanding code. It's really not an abstract concept; it's a very concrete attribute that a section of code has.