I mostly agree. Big-O knowledge helps you pick the right data structures and algorithms for the job. The idea is that it often saves you from rediscovering those complexity classes by trial and error while staring at systemtap traces or timestamps in the log stream.
For example, in real-time systems, picking a btree over a hash-based data structure can sometimes work better, since there is less chance of a sudden latency spike from hash resizing; instead you pay a small, predictable penalty on every insertion. I believe that. Have I actually measured it? No, because it would mean rewriting a bunch of code, and that takes time. So I don't know whether big-O actually saved my ass here.
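To make the spike concrete, here's a toy sketch (my own illustration, not something from production) that times each individual insertion into a Python dict, which is a hash table under the hood. Most inserts are O(1), but the occasional internal resize-and-rehash shows up as a worst-case insert far slower than the median, which is exactly the jitter you'd want to avoid in a real-time path:

```python
import time

def insert_times(n):
    """Time each individual insertion into a dict (hash table).

    Most inserts are amortized O(1), but occasionally the table
    must be resized and rehashed, producing a latency spike.
    """
    d = {}
    times = []
    for i in range(n):
        t0 = time.perf_counter()
        d[i] = i
        times.append(time.perf_counter() - t0)
    return times

times = sorted(insert_times(200_000))
median = times[len(times) // 2]
worst = times[-1]
# On a typical run the worst insert (a resize) is far slower than
# the median O(1) insert; exact numbers vary by machine.
print(f"median: {median:.2e}s  worst: {worst:.2e}s")
```

A balanced tree spreads that cost out: every insert pays a small O(log n) toll, but there is no single catastrophic one.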
That is just one example.
Or, when it comes to large data storage, knowing the underlying data structure a database uses gives you some expectation of how it will behave as the data grows.
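As a back-of-envelope sketch of what that expectation buys you (the fanout of 200 is my assumption for a typical database page, not a figure from any particular engine): a B-tree index needs roughly log_fanout(rows) page reads per point lookup, so growing the table 100x adds only about one extra page read.

```python
import math

def expected_btree_depth(rows, fanout=200):
    """Rough estimate: a B-tree with the given fanout needs about
    log_fanout(rows) page reads for a point lookup."""
    return max(1, math.ceil(math.log(rows, fanout)))

# Each 100x growth in row count adds roughly one page read,
# which is why B-tree-backed lookups degrade so gracefully.
for rows in (10_000, 1_000_000, 100_000_000):
    print(f"{rows:>11,} rows -> ~{expected_btree_depth(rows)} page reads")
# → ~2, ~3, ~4 page reads respectively
```

That kind of mental model lets you predict, before profiling, that lookups will stay fast while a full scan will not.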
All that said, it is hard to look back over 7+ years and say, aha, I know exactly how many times knowing big-O saved me from spending extra time and effort debugging. I can think of maybe only one or two times recently when I had to think about big-O, so I mostly agree with you.
It certainly seems that not knowing anything about big-O won't terribly handicap someone who knows how to use debugging and profiling tools. There are probably other, more practical bits of knowledge that matter more.
Despite this, these kinds of questions are very popular. I see a few reasons. 1) "Big Company" interviews. Big companies love hiring fresh college grads from good schools. Those grads don't have a lot of relevant software development experience, but they have to be selected and tested somehow, so theoretical CS is the go-to tool. 2) Other companies just copy the interview questions from big-company interviews, thinking "well, they are so big and successful because they use these kinds of questions to select candidates." Whether that's true or not, I don't know, but I believe that process goes on behind the scenes.