
Binary search. Very simple, incredibly powerful; it can search over data or over a mathematical function. It's the basis for other CS concepts.

JS implementation: https://gist.github.com/netgusto/90c8e0e7019a832cbf95eac58e1...




Regarding "very simple" - as I recall from some book, the first bug-free implementation appeared only several years after the algorithm was first described. From Wikipedia: "A study published in 1988 shows that accurate code for it is only found in five out of twenty textbooks" (https://en.wikipedia.org/wiki/Binary_search_algorithm).


Someone wrote a blog post about that book, and challenged people to write their own bug-free implementations. https://reprog.wordpress.com/2010/04/19/are-you-one-of-the-1...


It was in Programming Pearls, IIRC.


I like this version:

  low = -1
  high = arr.length

  while(high - low > 1){
    mid = (high + low)/2
    if(arr[mid] <= x) low = mid else high = mid
  }
This eliminates all branches from the body of the loop (it gets compiled to a conditional move). It's also more general, because it can be used to find the right position to insert a new element into a sorted array.


and it's also a broken version.


Actually, I can definitely say that this version is not broken, i.e. I've formally proven it to be mathematically correct assuming that:

1. high and low are bigints (such as when using Python, for example)

2. the input array is sorted (needed for any binary search algorithm)

3. the missing code at the bottom is added:

  if (0 <= low && low < arr.length) {
    if (arr[low] == x) {
      return low;
    }
  }
  return -1;  // (or "raise Not_found" or whatever you use to indicate that 'x' was not found).
You can find the formal proof in WhyML below. This includes proofs that:

1. all array accesses are in bounds [checked automatically]

2. there is no undefined behavior (such as division by zero) [checked automatically]

3. the function always terminates [enforced automatically, proven with the help of the 'variant' clause]

4. there are no overflows [checked automatically, not possible since we're using bigints]

5. if an index is returned, the array has an element with value equal to x at that index [the 'ensures' clause]

6. if instead the function returns "not found", the array does not have any element with a value equal to x [the 'raises' clause]

... all this assuming that the input array is sorted [the 'requires' clause]

Code + proof here (it makes more sense if you can read OCaml syntax): https://clbin.com/jbTk8
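For those without a WhyML toolchain, here's a sketch of the same algorithm, with the tail check added, in Python (where assumption #1 holds automatically, since Python integers are arbitrary-precision):

```python
def binary_search(arr, x):
    # Invariant: every element at index <= low is <= x, and every
    # element at index >= high is > x (treating arr[-1] as -infinity
    # and arr[len(arr)] as +infinity).
    low, high = -1, len(arr)
    while high - low > 1:
        mid = (high + low) // 2
        if arr[mid] <= x:
            low = mid
        else:
            high = mid
    # low is now the last index whose element is <= x, or -1 if none.
    if 0 <= low < len(arr) and arr[low] == x:
        return low
    return -1  # not found
```

Note that at loop exit, `low + 1` (equivalently `high`) is the position where `x` would be inserted to keep the array sorted, which is what makes this variant more general.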


Awesome! Interesting that it can prove that given only the loop invariant.

The inner loop could be factored to a subroutine with the following contract: let P be a predicate on the integers such that (i <= j) => (P(i) => P(j)), and let low be such that P(low) = false and high such that P(high) = true. The subroutine returns i such that P(i) = false and P(i+1) = true. The subroutine further promises to only call P(k) on low < k < high.

This may be applied to the predicate P(k) = (x <= arr[k]) on the range (low,high) = (-1,arr.length) to find the point at which the predicate flips from false to true (we're essentially pretending to know that there is a small value at arr[-1] and a large value at arr[arr.length], and the subroutine promises to only call P(k) on values strictly between -1 and arr.length).
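To make the contract concrete, here's how it might look in plain Python (the names `find_flip` and `lower_bound` are mine, not from the proof code):

```python
def find_flip(pred, low, high):
    # Contract: pred is monotone ((i <= j) => (pred(i) => pred(j))),
    # pred(low) is (conceptually) False, pred(high) is (conceptually)
    # True. Returns i such that pred(i) = False and pred(i + 1) = True,
    # calling pred only on low < k < high.
    while high - low > 1:
        mid = (high + low) // 2  # low < mid < high whenever high - low > 1
        if pred(mid):
            high = mid
        else:
            low = mid
    return low

# The instantiation described above: P(k) = (x <= arr[k]) on the
# range (-1, len(arr)) finds the leftmost position of x.
def lower_bound(arr, x):
    return find_flip(lambda k: x <= arr[k], -1, len(arr)) + 1
```

The lambda is never called with k = -1 or k = len(arr), exactly as the contract promises, so the out-of-range sentinels never cause an actual array access.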

Is it possible to do this with WhyML?


Actually, my loop invariant was unnecessarily confusing (I was sleep deprived), here's a better version: https://clbin.com/OhyJ3

Your question is very interesting and I think the answer is 'yes'. I will try to implement it and report back.


Ok, so this proved to be more difficult than I anticipated and it took me a while, but fortunately I really enjoyed the challenge!

It seems that the answer is definitely "yes", it can be done with WhyML.

There are a few minor caveats:

1. I had to change the function with the while loop to return an option instead of raising the Not_found exception, which makes the code a bit uglier. This is because I ran into the following bug in Why3 (it's already fixed in the master branch, but not in the most recent release): https://gitlab.inria.fr/why3/why3/issues/214

2. It turns out that the value of "high" is not statically known at compile time, because it depends on the size of the array. So I changed "high" to be a function of the array rather than a constant (I did the same to "low" for consistency).

3. Similarly and consequently, the predicate P(i) is not only a function of "i". It's also a function of the value being searched for ("x"), i.e. whether P(i) is true depends on which value you're searching for. This also means that P() is a function of the specific array being searched (because some arrays may contain "x" while others do not).

4. You said that "we're pretending to know that there is a small value at arr[-1] and a large value at arr[arr.length]", but unless I misunderstood something, we can't actually pretend that, this must be written in the code somewhere, otherwise it won't work. For simplicity, I have chosen to implement those rules in the predicate itself. However, this is also enforced by the function with the while loop, i.e., it always makes sure that whatever predicate you have chosen, it has to implement those rules. I am sure that this could be changed, but I think it would complicate the proofs.

The result is that in module "GenericBinarySearch", like the name says, we have a neat generic binary search function which works with any predicate P (called "pred" in the code), any function "low", any function "high" and for any type.

And in module "ArrayBinarySearch" we have an instantiation of "GenericBinarySearch" which works for sorted arrays!

Also note that the axiom in the code is only an axiom in the generic module (it specifies what must be true of P()), but when the generic module is instantiated, Why3 verifies that the axiom is actually true for the specific instance of the predicate (so there is no cheating).

You can find the code in the link below. Enjoy!

https://clbin.com/Ov3NX


Very nice. I don't quite understand why you need the option in GenericBinarySearch. Doesn't the function always return Some?

I had something like this in mind:

    let binary_search (pred: int -> bool) (low: int) (high: int): int
        requires { pred low = False }
        requires { pred high = True }
        requires { forall i j. (i <= j) -> (pred i -> pred j)}
        ensures  { pred result = False /\ pred (result+1) = True }
    =
        let cur_low = ref low in
        let cur_high = ref high in

        while !cur_high - !cur_low > 1 do
            variant { !cur_high - !cur_low }
            invariant { pred !cur_low = False /\ pred !cur_high = True }
            let mid = div (!cur_high + !cur_low) 2 in
            if not (pred mid) then
                cur_low := mid
            else
                cur_high := mid
        done;
        !cur_low
Can something like that work?

  let predicate pred (i: int) (x: t) (k: int)
  = if i <= low x then false else
    if i >= high x then true else
    k <= x.arr[i]
Those tests are only necessary for the proofs, right? At run time the predicate will never be called with i <= low or i >= high, but only on low < i < high, because if high - low > 1 then low < (high + low)/2 < high. That's what I meant by pretending that the predicate returns false/true on -1 or arr.length: we don't actually need the predicate to work on those indices, because we never call the predicate at those indices.


> Can something like that work?

Yes, indeed! Your version is not only a lot simpler and more elegant than mine, but your proof code was enough for Z3 to automatically figure out that everything works.

Therefore, I hereby declare that you are ready to use and learn more about Why3/WhyML.

Here's a working complete implementation of your version: https://clbin.com/cwvKY

There's only a minor caveat with your version: there's no formal guarantee that `binary_search` won't call `pred` below `low` or above `high`, because (as far as I know) we can't encode those requirements if we pass the function as an argument. Maybe there's a way to do that, but I don't know WhyML that deeply.

> Those tests are only necessary for the proofs, right?

I think so. We can probably separate `pred` into a logical version and a run-time version, making sure that the result of the logical `pred` matches the result of the run-time `pred` when `-1 < idx < length arr`.

I think this should be very easy to do with WhyML, I will try and figure out if I can do it.


That would be nice, then the run time predicate can be fast while the logical version remains elegant.

What happens if you remove requires { forall i j. (i <= j) -> (pred i -> pred j)}? I think it should still check, actually. That property ensures that the answer will be unique, but the algorithm will find a point pred i = False /\ pred (i+1) = True even if the predicate does not satisfy that property.

If you add the uniqueness to the ensures, does Z3 still automatically prove it correct?

  ensures { forall j. pred j = False /\ pred (j+1) = True -> j = result }


> What happens if you remove requires { forall i j. (i <= j) -> (pred i -> pred j)}? I think it should still check, actually.

If I simply remove that 'requires', then Why3 cannot prove the postcondition of `binary_search` automatically anymore (using the Z3, CVC4 and Eprover automatic provers that I have installed).

Specifically, Why3 tries to split the postcondition into the 2 parts: `pred result = False`, which gets verified correctly, and `pred (result+1) = True`, which cannot be verified automatically.

I think this is because nothing stops `high` from actually being lower than `low`. If someone passes those 2 arguments reversed (high as low and low as high), then probably the algorithm wouldn't work, right? At least, Why3 is not convinced that it would work.

However, if I add this loop invariant: `!cur_low < !cur_high` and this function precondition: `requires { low < high }`, then it all works fine! (both are required).

Diff here: https://clbin.com/wVkhO And full code here: https://clbin.com/2gBfJ

I tried separating `pred` into logical and run-time versions (one with the ifs and the other only with the comparison), and it all seems to work fine, except for one problem:

The run-time version of `pred` is a partial function (it only works for valid array indices), so it needs a precondition. However, when I pass `pred` as an argument to `binary_search`, I can't / don't know how to specify that the argument needs the precondition.

Therefore, Why3 complains that the precondition of `pred` may not be respected (all other verification conditions are proven to be valid).

I could do what I did before (making `low` and `high` functions, etc) but that greatly complicates the code...

Maybe there is some way to do that, but currently I don't know how.


> I think this is because nothing stops `high` from actually being lower than `low`.

Ahh, right. I guess that's exactly the type of oversight that a checker is for :)

We could return the pair (!cur_low, !cur_high) and have the postcondition that pred (fst result) = False and pred (snd result) = True and abs (fst result - snd result) = 1. Then it would also work if low > high, but I'm not sure this is useful in practice...

> The run-time version of `pred` is a partial function (it only works for valid array indices), so it needs a precondition. However, when I pass `pred` as an argument to `binary_search`, I can't / don't know how to specify that the argument needs the precondition.

If I'm understanding this correctly, you want to do something like this:

  let binary_search (pred: (i:int) -> bool requires { low < i < high }) (low: int) (high: int): int
But Why3 does not support this?

If you add a precondition like that to pred, wouldn't that also prevent requires/ensures/invariant from calling pred on arguments that don't satisfy the precondition? In the precondition we do want pred low = False /\ pred high = True, but the run time predicate only allows pred k for low < k < high?


> If I'm understanding this correctly, you want to do something like this:

Exactly!

> But Why3 does not support this?

As far as I can tell, it doesn't. I get a syntax error if I either try to name the argument to pred or if I try to add a 'requires {}'.

Maybe they will add this functionality to a future version, or maybe there is already a different but simple way to do this (but I don't know how).

> If you add a precondition like that to pred, wouldn't that also prevent requires/ensures/invariant from calling pred on arguments that don't satisfy the precondition?

No, logical/proof functions are always total, they cannot be partial.

One option is to always define what the function should return for the entire domain of its arguments.

The other main option AFAIK is to define the results only for a restricted domain that interests us and leave the function undefined outside this restricted domain (but in this latter case, we won't be able to extract conclusions about what the function returns outside this restricted domain).

However, as far as I know, the latter option needs to be implemented differently in Why3, specifically as an abstract predicate/function, and then you separately define axioms about things you know about the predicate/function. The disadvantage is that if you make a mistake in one of the axioms (say, you accidentally define that the function returns both True and False for the same input), then you are introducing an inconsistency which allows you to prove anything you want (i.e. you would be able to trivially prove that 2 = 3). This is undesirable, of course.

I think I saw somewhere that if you define a predicate `P` in Why3 that works both for runtime and for proofs, and then you add a precondition `A` to this predicate, then Why3 will automatically add an implication to the predicate for proofs, i.e. if you use this predicate in a proof, the predicate will become `A -> P(x)` instead of just `P(x)`. But I'm not entirely certain about this, I could be wrong.

Unfortunately Why3 is not very well documented, the vast majority of what I've learned so far has been through reading the examples, looking through the git history, and trial-and-error.

> In the precondition we do want pred low = False /\ pred high = True, but the run time predicate only allows pred k for low < k < high?

Exactly, this is why I was trying to add a new predicate (for proofs only), which returns False for i <= low and True for i >= high, but calls the other predicate otherwise.

However, I still run into the same problem: I cannot specify that a function argument needs a precondition, and therefore Why3 cannot tell that the precondition doesn't get violated...


Thanks, this is a great comment (and so is the rest of the thread). Yes, I meant that it doesn't work without assumption #1, and it was missing #3.


How so? Are you talking about (high + low)/2? Whether that is correct depends on the language. With 64 bit arithmetic it's correct in practice, and with arbitrary size arithmetic it's correct in practice and in theory. It only really causes problems if you're using 32 bit arithmetic on a 64 bit machine.


Just keep telling yourself that.

Or, look up the right answer.


Perhaps instead of being mean, you could be helpful and post a link to said answer.


With 64 bit variables it probably works. But for smaller bit widths, here is the correct implementation: https://ai.googleblog.com/2006/06/extra-extra-read-all-about...


This implementation:

    mid = ((unsigned int)low + (unsigned int)high) >> 1;
essentially extends the bit width from 31 bits to 32 bits by going from signed to unsigned, and is thus correct up to array lengths on the order of 2^31 rather than 2^30. Using an even larger bit width is at least as correct as this.

In summary,

(low + high)/2 or (low + high) >> 1 is correct up to 2^30 for 32 bit signed and up to 2^62 for 64 bit signed, up to 2^31 for 32 bit unsigned, and up to 2^63 for 64 bit unsigned.
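The failure mode being discussed is easy to reproduce by simulating 32-bit signed wraparound in Python (the masking only mimics fixed-width C arithmetic; Python's own integers never overflow):

```python
def mid_int32(low, high):
    # Simulate `mid = (low + high) / 2` in 32-bit signed arithmetic.
    s = (low + high) & 0xFFFFFFFF
    if s >= 0x80000000:        # sum wrapped past INT_MAX: reinterpret
        s -= 0x100000000       # as a negative two's-complement value
    # (C truncates division toward zero while Python floors, but
    # either way the sign is wrong once the sum has wrapped.)
    return s // 2
```

With two perfectly valid indices around 2**30, such as `mid_int32(2**30, 2**30 + 2)`, the sum exceeds INT_MAX and the "midpoint" comes out negative.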


A cool thing about binary search is you can also use it in real life. It's great for troubleshooting.


`git bisect` is an indispensable git trick. You give it a commit where you know the bug exists, and one where you know it doesn't, and it'll use binary search to help you find where the bug came from. It's absolutely wonderful.


I always struggle to see why people bang on about git bisect. You need tests to make it work. If you have tests, why aren't you running them continuously? If you're running them continuously, why do you need git bisect?


Sure; if you never write code that contains bugs that your unit tests don't catch, then "git bisect" is probably useless to you.

As for the rest of us mere mortals, we sometimes write code that has bugs without discovering those bugs right away. In a complex system, it might not be easy to observe an incorrect behavior and immediately deduce the root cause. "git bisect" allows you to retroactively track down the commit that introduced a behavior change, even if it was something you didn't originally think to test for.


If the bug is a regression, you write the test after finding it, and execute it with git-bisect. The first commit where the test fails is the one that introduced the bug.


If I can write a failing test, I can likely fix it. If I am interested in seeing who broke it and when, I can run git blame on the thing I'm fixing.

Given that my test suite is generally in the same repo as the code, I'd need to write something that patched my new test into the codebase at every git bisect commit, recompile and run it.

I can see that this may occasionally be useful if you have an extremely hard to find bug, but for me it's pretty rare. In fact I've done this once, ever.

Hence my skepticism when people describe this as "git's killer feature" or whatever other hyperbole.


You don't really need automated tests. You can use a reproducible bug report as test input, then use git-bisect to determine the first point in time it appears.


Because your tests didn't have 100% coverage.


It's useful for projects where maintaining tests is impractical, such as the Linux kernel. When someone tries to find a regression in the kernel, they'll probably write a test which greps dmesg for a certain output; sometimes the regression is a kernel panic, and you are basically checking whether your system crashes on each bisect step.


Yep, I use this all the time to figure out which SVN commit introduced a bug (run the last deployed version and verify it didn't have the bug, then binary search the commits in-between that and HEAD until I figure out which one caused it).


I recently learned about git bisect. Very useful!


git bisect is cool, but this also works with electronic circuits, AV signal chains, etc...


Have a look at interpolation search.

The complexity is O(log log n), under the assumption of a uniform distribution of the data.

https://en.wikipedia.org/wiki/Interpolation_search
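A minimal sketch in Python: instead of probing the midpoint, it probes where x "should" sit if the values were spread uniformly between arr[low] and arr[high]:

```python
def interpolation_search(arr, x):
    low, high = 0, len(arr) - 1
    while low <= high and arr[low] <= x <= arr[high]:
        if arr[high] == arr[low]:
            pos = low  # avoid division by zero on a run of equal values
        else:
            # Probe proportionally to where x sits in [arr[low], arr[high]].
            pos = low + (x - arr[low]) * (high - low) // (arr[high] - arr[low])
        if arr[pos] == x:
            return pos
        if arr[pos] < x:
            low = pos + 1
        else:
            high = pos - 1
    return -1
```

On uniformly distributed data each probe shrinks the range drastically (hence the O(log log n) expected cost), but on skewed data it can degrade to O(n), which is why the distribution assumption matters.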


I've used it as a very simple equation solver in a pinch. Yes, it's a very naive approach for most equations, but when you just need a satisfactory solution to a given equation quickly, it's very welcome.


...wait, how? I’ve never heard of a use like this.


If f is monotonic and continuous, you can solve f(x) = y for x by successively narrowing an interval [a, b] where f(a) <= y <= f(b) (or vice versa for decreasing functions). In the case that the range of f spans the entire real numbers, the initial range can be determined as the smallest range [-2^k, 2^k] where k is an integer (a strategy often used for open-ended binary search).
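A sketch of that strategy in Python (the doubling loop finds the initial bracket; the tolerance and function names are my own choices):

```python
def solve_monotone(f, y, tol=1e-12):
    # Solve f(x) = y for an increasing, continuous f whose range
    # spans all the reals, via open-ended binary search.
    k = 1.0
    while not (f(-k) <= y <= f(k)):   # grow the bracket [-2^k, 2^k]
        k *= 2
    lo, hi = -k, k
    while hi - lo > tol:              # invariant: f(lo) <= y <= f(hi)
        mid = (lo + hi) / 2
        if f(mid) <= y:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, `solve_monotone(lambda x: x**3, 27.0)` converges to the cube root of 27.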


Interestingly, with floats you can also binary search on the binary representation. That narrows down the answer to the most precise answer representable in floating point in at most 64 steps for 64 bit floats. If you bisect the interval in the middle at each step, then it could take more steps (possibly more than 1000).

It turns out that doing this is pretty much equivalent to what you suggest, if you narrow down the initial range [-2^k, 2^k] by binary searching on k, because that's just binary searching on the exponent in the binary representation.
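Here's a sketch of the bit-pattern trick in Python using `struct`, with square root as the example problem. It relies on the fact that non-negative IEEE 754 doubles sort in the same order as their bit patterns (the function names are mine):

```python
import struct

def d2i(x):
    # The bits of a double, as an integer. For non-negative doubles
    # the integer order matches the numeric order.
    return struct.unpack('<q', struct.pack('<d', x))[0]

def i2d(i):
    return struct.unpack('<d', struct.pack('<q', i))[0]

def bit_bisect_sqrt(y):
    # Largest representable double v with v*v <= y, for y >= 1.
    # At most ~64 steps, since we bisect integer bit patterns.
    lo, hi = d2i(1.0), d2i(y)   # 1 <= sqrt(y) <= y when y >= 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if i2d(mid) ** 2 <= y:
            lo = mid
        else:
            hi = mid
    return i2d(lo)
```

Bisecting the bit patterns is effectively binary search on the exponent first and the mantissa second, which is why it terminates in a bounded number of steps where midpoint bisection on the values would not.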


Interesting! I didn't really think about this but this is how I implemented sqrt() and division in an interview. Didn't think of how it generalizes!


Isn't this bit:

const mid = Math.floor((right + left) / 2);

susceptible to overflow?

EDIT: Hm, perhaps not (in JS). Number.MAX_SAFE_INTEGER is much greater than I expected.


Indeed, Java's binary search had this bug for a while! It was found through automatic verification, by the way.

https://ai.googleblog.com/2006/06/extra-extra-read-all-about...


also some CS books


Incidentally, as outlined in this Julia FAQ, native machine arithmetic with overflow will deal with this correctly.

    mid = (lo + hi) >>> 1
https://docs.julialang.org/en/v1/manual/faq/index.html#Why-d...

EDIT: legibility


Thank you, fixed it!

const mid = Math.floor(left + (right - left) / 2);

https://stackoverflow.com/q/27167943


Not in Javascript, where every number is a double-precision float. You would lose precision at about 2^53, but that's not a limit that will meaningfully affect us for a good while.


Oh really? And what if you aren't storing the data? You just want to find an n such that f(n) >= m and f(n+1) < m. Then it might become an issue.



