I think the first answer's intuition is actually better. It's stated a little opaquely, though, so it may not be useful as a teaching tool until you already understand why binary-search-type algorithms, which rule out a significant fraction of the input at each step, are O(log n).
Basically, phrasing big-O as a property of the algorithm rather than of the tree structure of the solution space is more helpful in practice. I think it's quite common to have some problem in front of you and say, "OK, I can see immediately there's a naive solution that's linear; is there a smarter way?" And the answer nearly always comes (for me at least) when I try to answer the question, "Can I find some way to make smarter decisions at each step?" If the answer is "yes" or "maybe", you might be able to get to O(log n). But quite often the answer is "no", because you're streaming random data or something, or there's some logical reason you have to look at every single input object; then you can immediately rule out better solutions and just get to work on the linear one.
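To make the "rule out a fraction of the input at each step" framing concrete, here's a minimal binary search sketch (my own illustration, not from either answer). Each comparison discards half of the remaining candidates, so the loop runs at most about log2(n) times:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] < target:
            lo = mid + 1   # target can't be in the lower half; discard it
        else:
            hi = mid       # target can't be above mid; discard the upper half
    if lo < len(sorted_items) and sorted_items[lo] == target:
        return lo
    return -1
```

The "smart decision at each step" is the single comparison against the middle element; that one decision is what buys the logarithmic bound.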
That insight rarely comes to me when I ask, "Is there any way to think of the input as a balanced binary search tree?" If there actually is such a tree structure, you're bound to find it anyway by asking, "What smart decision am I making at each level?"
I agree that I rarely find myself asking whether I can represent the input as a binary search tree; I just find the binary-search-tree image a very effective way of demonstrating how logarithms come into this at all. It's an easy example of log n complexity.
That being said, I'm not sure how much my speculation is worth here. I'm very comfortable with the idea of log n complexity and don't remember how it was taught to me in the first place. Given that OP accepted the other answer and that it has significantly more votes, it seems to be helpful for more people.