Right -- Big O notation is a useful formalization of a thinking process that in practice usually includes more information.
That is -- anytime you write a loop, you'd better think about what's inside it (and how long those things will take), and -- given the amount of data you might be sending through that loop -- whether that's a problem or not.
I want someone to notice that they're making a remote call in a loop when it could be batched, cached, etc. I don't want them to spend 2 days tinkering with a bit of text-processing code that yes, is inefficient, but in practice is only going to take a few milliseconds anyway and isn't a hotspot.
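To make the remote-call-in-a-loop point concrete, here's a minimal sketch. `fetch_user` and `fetch_users` are hypothetical stand-ins for a real remote API -- they just count round trips, since the round trips (not the asymptotic complexity) are what dominate here:

```python
# Hypothetical stand-ins for a remote API; each call to either
# function represents one network round trip.
calls = {"count": 0}

def fetch_user(user_id):
    calls["count"] += 1          # one round trip per user
    return {"id": user_id}

def fetch_users(user_ids):
    calls["count"] += 1          # one round trip for the whole batch
    return [{"id": u} for u in user_ids]

user_ids = list(range(100))

# Naive: a remote call inside the loop -- 100 round trips.
naive = [fetch_user(u) for u in user_ids]
naive_calls = calls["count"]

calls["count"] = 0

# Batched: one round trip for the same data.
batched = fetch_users(user_ids)
batched_calls = calls["count"]

print(naive_calls, batched_calls)  # 100 vs 1
```

Both versions are O(n) in the data, which is why Big O alone won't flag the difference -- but at, say, 20ms of latency per round trip, the naive version costs ~2 seconds and the batched one ~20ms.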
If they can tell you in Big O notation why it's bad for large inputs then great, but as long as they can tell you why, that's what really matters.