In my opinion, the misunderstanding arises from not really understanding what log n means, rather than from O notation itself.
Intuitively, logarithms are much easier to understand in the context of exponentials. For example, take "something" (a function, space, money, etc.) that starts at value c and doubles with each iteration: c, 2c, 4c, 8c, ... That is what we call exponential growth, and the value at iteration n is c * 2^n.
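To make the doubling sequence concrete, here is a tiny Python snippet (the starting value c = 5 is just an arbitrary choice for illustration):

```python
# Doubling growth: starting at c, the value after n iterations is c * 2**n.
c = 5
values = [c * 2 ** n for n in range(5)]
print(values)  # [5, 10, 20, 40, 80]
```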
Where does the logarithm come in? It is defined in the following way:
if a^b = c, then log_a c = b (read: the log of c in base a equals b)
so an exponential relationship can always be written in terms of logarithms.
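A quick sanity check of that inverse relationship in Python:

```python
import math

# Exponentiation: 2 raised to the power 10 gives 1024...
print(2 ** 10)          # 1024
# ...and the base-2 logarithm recovers the exponent: log_2 1024 = 10.
print(math.log2(1024))  # 10.0
```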
Now, if the answer we are looking for is not the final value but how many iterations it took to get there ("b" above), that is exactly what applying the logarithm gives us.
How does all of this apply to function execution time and Big O notation?
Well, suppose we are searching over a space of size N with a function F that works iteratively: at each iteration it divides the search space in 2 and discards the wrong half. The space left to search at each iteration is then: N, N/2, N/4, ..., N * (1/2)^n.
How long will this function take to run? Starting from a space of size N, it keeps iterating until it has narrowed the space down to 1, at which point the search is over and it has found (or not) what it was looking for. So the real question is: how many iterations does the function need? And as explained above, that is exactly the definition of the logarithm.
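A minimal sketch of that iteration counting in Python (the function name iterations_to_one is my own, just for illustration):

```python
import math

def iterations_to_one(n):
    """Count how many halvings it takes to shrink a space of size n to 1."""
    count = 0
    while n > 1:
        n //= 2      # discard the wrong half of the space
        count += 1
    return count

for n in (16, 1024, 2 ** 20):
    # For powers of two the count matches log2(n) exactly.
    print(n, iterations_to_one(n), math.log2(n))
```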
Therefore (with '~' meaning 'proportional to'):
time to run ~ # of iterations n, where N * (1/2)^n = 1, i.e. 2^n = N, i.e. n = log_2 N
so the running time is proportional to log_2 N, which is O(log N) (the base of the logarithm only contributes a constant factor).
In general (intuitively, though not always true), if a function divides the remaining work by a constant factor (2, 3, 4, ...) at each iteration, its running time, that is, its number of iterations, will be O(log n).
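To illustrate that the constant factor does not matter, here is a hypothetical sketch (the function name iterations is my own) that divides the work by an arbitrary factor k each iteration; the iteration count tracks log base k of n, and changing the base only rescales the count by a constant:

```python
import math

def iterations(n, k):
    """Iterations needed to shrink work of size n down to 1, dividing by k each time."""
    count = 0
    while n > 1:
        n /= k       # shrink the remaining work by a constant factor k
        count += 1
    return count

for k in (2, 3, 10):
    # The count is (up to rounding) log base k of n.
    print(k, iterations(10 ** 6, k), math.log(10 ** 6, k))
```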