Fun fact: there are some other lessons here. It can sometimes pay off to (1) generalize your function, and (2) respect the mathematical axioms you're supposed to be following.
This (obviously) isn't to say you should always generalize everything, but you should at least consider what would happen if you did, and if the difference is small, perhaps do it. The benefit is that it can avoid problems that aren't otherwise obvious—sometimes by design, sometimes by accident.
In particular, (x + y) / 2 is the wrong implementation of midpoint in general, because it would fail to even compile on objects you can't add together. But midpoint is well-defined on anything you can subtract (i.e. anything you can define a consistent distance function for)—and it doesn't require addition to be well-defined between those objects!
One obvious (in C/C++, and not-so-obvious in Java) counterexample here is pointers/iterators. You can subtract them, but not add them. And, in fact, if you implement midpoint in a manner that generalizes to those and respects the intrinsic constraints of the problem, you end up with the same x + (y - x) / 2 implementation, which doesn't have this bug.
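A minimal sketch of that generalized form (the template and the name midpoint_between are mine, purely illustrative):

    // Midpoint over things you can subtract but not add, e.g. pointers or
    // random-access iterators in C++.
    template <class It>
    It midpoint_between(It lo, It hi) {
        return lo + (hi - lo) / 2;   // (lo + hi) / 2 wouldn't even compile for iterators
    }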
Interesting. Another example is datetimes. You can't add datetimes. You can add a datetime and a time delta, and the difference of two datetimes is a timedelta.
I guess in maths this is called a generating Lie algebra (maybe someone can comment on this?)
1. You have a 0 time delta, and you can add and subtract them satisfying some natural equations. (time deltas form a group)
2. You can add time deltas to a datetime to get a new datetime, and this satisfies some natural equations relating to adding time deltas to each other (time deltas act on datetimes).
3. You can subtract two datetimes to get a time delta satisfying some more natural equations (the action is free and transitive).
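As a small illustration, C++'s std::chrono encodes exactly this split (a sketch; time_point plays the role of the datetime, duration the role of the time delta):

    #include <chrono>

    int main() {
        using namespace std::chrono;
        system_clock::time_point a = system_clock::now();
        system_clock::duration   d = hours(24);  // time deltas form a group
        system_clock::time_point b = a + d;      // a delta acts on a datetime
        system_clock::duration   e = b - a;      // datetime - datetime = delta
        // a + b;                                // doesn't compile: datetimes can't be added
        (void)e;
    }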
The term I was looking for was affine structure, as I commented to someone else. But from your link, which I can't understand entirely, I get the sense that a torsor is an even bigger generalization.
> The term I was looking for was affine structure, as I commented to someone else. But from your link, which I can't understand entirely, I get the sense that a torsor is an even bigger generalization.
An affine space is a torsor under a vector space, and you can have instead a torsor under any group. This loses a bit of structure, in the sense that you can take convex combinations in an affine space but not in an arbitrary torsor; but otherwise it is a proper generalisation. But the convex combination $(a + b)/2$ used to obtain a midpoint is exactly what we want here!
Indeed, torsors have exactly the properties you describe, but notably not the ability to find the midpoint between two points (that would involve extracting square roots in a group, which is not guaranteed possible, or uniquely defined when possible).
And in fact finding the midpoint is not possible half the time in the space we're interested in (https://news.ycombinator.com/edit?id=33497270). So what is the algebraic structure that underlies the binary-search algorithm, since evidently it isn't really the torsor of a group?
> So what is the algebraic structure that underlies the binary-search algorithm, since evidently it isn't really the torsor of a group?
Though it pains me to say so as an algebraist, I think that it probably just isn't a problem most usefully modelled with a more abstract algebraic structure. Although it would be easy to cook up a structure permitting "division with rounding" … maybe a Euclidean domain (https://en.wikipedia.org/wiki/Euclidean_domain) is something like the right structure?
I'm not an algebraist! So maybe what I'm about to say is dumb:
As I understand it, the correctness of any correct algorithm is conditioned on some requirements about the data types it's operating on and the operations applicable to it. Once you formalize that set of requirements, you can apply the algorithm (correctly) to any data type whose operations fulfill them.
But some sets of data and some operations on them that fulfill some formally stated requirements are just an abstract algebra, aren't they? Like, if you have two data types {F, V} and operations on them {+, ×, ·, +⃗} that fulfill the vector-space axioms, they're a vector space, so any vector-space algorithm will work on them. So every algorithm (or, more accurately, every theorem about an algorithm) defines some algebraic structure (or several of them), which may or may not be a well-known one.
For binary search you have two sets: the set of indices I, which is what we've been talking about, and the set E of elements that might occur in the array a you're searching through. You need a total order on E, and I think you need a total order on I, and the array a needs to be sorted such that aᵢ ≤ aⱼ if i < j (though not conversely, since it might contain duplicates). You need a midpoint operation mid(i, j) to compute a new index from any two existing indices such that i ≤ mid(i, j) < j if i < j. And "mid" needs a sort of well-foundedness property on I to guarantee that the recursion eventually terminates; for any subset of ℤ you can define a "mid" that fulfills this, but for example in ℚ or ℝ you cannot, at least not with the usual ordering.
I don't think a Euclidean domain is quite the right thing for I because there's no obvious total order, and I think you need a total order to state the sortedness precondition on the array contents. Also, lots of Euclidean domains (like ℚ or ℝ) are dense and therefore could lead you to nonterminating recursion. And it isn't obvious to me that "mid" requires so much structure from I.
If you care about complexity you also have to define the cost of operations like ≤, mid, and array indexing, so your algorithm is no longer defined on just an abstract algebra, but an abstract algebra augmented with a cost model. But for correctness and termination that isn't required.
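To make that concrete, here's a rough sketch (mine, not a standard formulation) of binary search written against roughly that interface. Note that besides mid() it also uses a successor operation on indices (the "h + 1" in the Go code quoted elsewhere in the thread), which the description above leaves implicit:

    // Finds the first index whose element is >= key.
    // Invariant: get(i) < key for all i < lo, and get(j) >= key for all j >= hi.
    template <class Index, class Elem, class MidFn, class SuccFn, class GetFn>
    Index first_not_less(Index lo, Index hi, const Elem& key,
                         MidFn mid, SuccFn succ, GetFn get) {
        while (lo < hi) {
            Index m = mid(lo, hi);   // requires lo <= m < hi
            if (get(m) < key)
                lo = succ(m);        // answer lies strictly above m
            else
                hi = m;              // answer lies at or below m
        }
        return lo;
    }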
> But some sets of data and some operations on them that fulfill some formally stated requirements are just an abstract algebra, aren't they?
Not quite. A variety of algebras (which is usually what people have in mind when they talk about "algebraic structures" in general) is a collection of operations with equational laws, meaning that they're of the form "for all x_0, x_1, ... (however many variables you need), expression_1(x_0, x_1, ...) = expression_2(x_0, x_1, ...)", where the expressions are built up out of your operators.
Fields are the classic example of a structure studied by algebraists which is nonetheless not a variety of algebras: division isn't described by equational laws, because it's defined for everything except zero. This makes fields much harder to work with than e.g. groups or rings, in both math and programming.
> I don't think a Euclidean domain is quite the right thing for I because there's no obvious total order, and I think you need a total order to state the sortedness precondition on the array contents. Also, lots of Euclidean domains (like ℚ or ℝ) are dense and therefore could lead you to nonterminating recursion. And it isn't obvious to me that "mid" requires so much structure from I.
I certainly agree that one can create a structure that encodes the algebra of binary search, and, at a casual glance, your definition looks good to me. I meant only that such a structure seems unlikely to do much more than to encode binary search (unlike, say, affine spaces, which are useful in a very broad variety of settings for which they have not been purpose-built) … although of course any mathematical structure, howsoever abstract or specialised, will find other uses if enough people get interested in it.
well, it'd be nice if it turned out that it was a consequence of some other well-known structure that was more general than just 'subsets of the integers'. but maybe it isn't.
Not all metric spaces have midpoints (or unique midpoints) so it’s not true you can compute a midpoint any time you have a distance function (you are right you can define it but that’s kind of useless computationally since it doesn’t give you an algorithm).
If we're going the pedantic route, note that you don't need (and in fact half the time cannot have) uniqueness in our case anyway. There isn't really a unique midpoint for {0, 1, 2, 3}; both 1 and 2 are valid midpoints, even for binary search. We just pick the first one arbitrarily and work with that.
But note that that sentence was just about calculating midpoints, not about the larger binary search algorithm. And in any case, I was just trying to convey layman intuition, not write a mathematically precise theorem.
This should also be obvious after a bit of thought to anyone who has worked with timestamps, and is also well-known in e.g. animation, where the midpoint is just interpolation with p = 0.5.
There are countably infinitely many Turing machines, and there is one for every element of ℤ. But there are uncountably infinitely many real numbers, so we're out of luck for almost all of them.
The bug in question is trying to compute an average as
avg = (x + y) / 2
which fails both for signed ints (when adding positive x and y overflows INT_MAX) and for unsigned ints (when x + y wraps around). Note that this can only be considered a bug for array indices x, y when these are 32-bit variables and the array can conceivably grow beyond about 2^30 (one billion) elements in the signed case, or 2^31 in the unsigned case.
I wonder what the simplest fix is if the ordering between x and y is not known (e.g. in applications where x and y are not range bounds) and the language has no right-shift operation...
Binary search can be done on anything, not just arrays. Often you apply it to an algorithm and there isn't a collection at all, you just know the right answer is between some numbers so binary search lets you find it in logarithmic number of tries. If computing the number is costly then binary search is necessary to compute the result at all in those cases.
C++ has had a correct binary search in its standard library since C++98, and it works on pointers, integers, etc., both signed and unsigned, without overflows. I'm not sure why they say that this doesn't exist.
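For reference, a minimal usage sketch (std::lower_bound is the relevant function; it works on any random-access iterators, including plain pointers):

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> v = {1, 3, 5, 7, 9};
        // Iterator to the first element not less than 5.
        std::vector<int>::iterator it = std::lower_bound(v.begin(), v.end(), 5);
        bool found = (it != v.end() && *it == 5);
        (void)found;
    }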
And this is exactly why I like to use higher level programming languages. Let someone smart figure all this out for me, and give me (grug) a generic binary search routine that works on arbitrary collections of arbitrary ordered things.
In general (x + (y - x) / 2) is more general than (x + y) / 2. If x and y are not in some group, but rather in the torsor of some group, you can't really sum them. Any attempt to do so involves introducing some arbitrary reference point. You can always do this, but once you do, you're at risk of your calculation results depending on the choice of arbitrary reference point and hence being meaningless.
The difference of two elements of the torsor of some group G is an honest-to-God group element of G, though, and so you have an honest-to-God identity element. You may or may not have an honest-to-God division or halving operator (which computes e given (e + e)) but in cases where G is the additive group of some field you do.
However, in this case our array indices are drawn from something like ℤ/2³²ℤ, and we might be trying to halve odd numbers, so none of this is justifiable! We want something different from our halving operator.
This was always my go to interview question when I wanted to smugly prove to someone I’m smarter than them because I knew in fact they were smarter than me and I was feeling insecure. Good to see others use overflow gotchas too.
My favorite was: write a function that determines the number of games necessary to be played in a single-elimination tournament with N participants. It's interesting to watch how many go off into recursion land when they get into the mindset of solving these Leet Code puzzles.
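(The intended answer, presumably, needs no recursion at all; a sketch:)

    // Every game eliminates exactly one participant, and exactly one
    // participant is never eliminated, so n - 1 games suffice (assuming n >= 1).
    int gamesInSingleElimination(int n) {
        return n - 1;
    }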
My favorite is when interviewers expect you to know sportsball stuff like tournament elimination rules when interviewing programmers who clearly don’t care about sportsball
Then cast to unsigned int before the division (i.e., so the halving is a logical rather than arithmetic shift).
Then cast back to signed int.
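A sketch of that suggestion (assuming, as in the binary-search case, that x and y are non-negative, so their true sum fits in an unsigned int):

    int avg_nonneg(int x, int y) {
        unsigned sum = (unsigned)x + (unsigned)y;  // can't overflow for non-negative ints
        return (int)(sum / 2);                     // unsigned division = logical shift
    }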
func Search(n int, f func(int) bool) int {
// Define f(-1) == false and f(n) == true.
// Invariant: f(i-1) == false, f(j) == true.
i, j := 0, n
for i < j {
h := int(uint(i+j) >> 1) // avoid overflow when computing h
// i ≤ h < j
if !f(h) {
i = h + 1 // preserves f(i-1) == false
} else {
j = h // preserves f(j) == true
}
}
// i == j, f(i-1) == false, and f(j) (= f(i)) == true => answer is i.
return i
}
If you care about stuff like this you may enjoy the puzzle "Upside-Down Arithmetic Shift":
The solution here is not really interesting except from a language design perspective. Go avoids this problem by having the maximum array length be int, but doing the math in uint. This won't work in languages that lack unsigned integers (Java) or whose maximum array sizes are themselves unsigned (size_t in C/C++).
Go was designed by (among others) the father of Unix Ken Thompson, with an understanding of the mistakes of C and C++.
Another example is that Go requires explicit integer casts (disallowing implicit integer casts) to avoid what is now understood to be an enormous source of confusion and bugs in C.
You can understand Go as an improved C, designed for a world where parallel computing (e.g., dozens of CPU cores) is commonplace.
Well, that's arguable. This "mistake" could be fixed in C tomorrow without breaking the semantics of any existing C code, but notice that this hasn't been "fixed" in any of the latest C standards, so perhaps it's still there for a reason.
And the reason it hasn't been "fixed" is that compilers can optimize code better if they can assume that signed addition won't overflow.
So it's more of a trade-off rather than strictly being a disadvantage.
It's also something you can "fix" in your own code if you want to, by passing a compiler flag (-fwrapv in gcc), although arguably, at that point your code wouldn't be strictly C-standard compliant anymore. Or by using some library that handles arithmetic overflow by wrapping around, which could be implemented in standard C.
> Another example is that Go requires explicit integer casts (disallowing implicit integer casts) to avoid what is now understood to be an enormous source of confusion and bugs in C.
I agree with you on this, although forcing explicit casts also makes the code more verbose and can make it harder to understand what's going on.
I think a balanced approach is requiring explicit casts only for the "tricky" cases, i.e. when the values might become truncated, and possibly also when sign-extension might be necessary and therefore might result in something the programmer didn't expect.
But if I were to design a language I'm not sure that I would require explicit casts for e.g. promoting a uint16_t to a uint32_t...
> You can understand Go as an improved C, designed for a world where parallel computing (e.g., dozens of CPU cores) is commonplace.
That's a bit of a hot take :) Let me know when the Linux kernel starts to get rewritten in Go ;)
None of that requires or even supports undefined overflow.
> Signed integer expression simplification [...]
This is easily addressed by allowing (but not requiring) implementations to keep wider intermediate results on signed overflow, to an unspecified width at least as wide as the operands.
> the index calculations are typically done using 32-bit int
No, they're done using size_t; that's what size_t is for.
> undefined overflow ensures that a[i] and a[i+1] are adjacent.
No, the fact that i (of type size_t) < length of a <= size of a in bytes <= SIZE_MAX ensures that a[i] and a[i+1] are adjacent.
(Also, note that this does not mean the optimizer is allowed to access memory at &a[i+1], since that might be across a page boundary (or even the wraparound from address ...FFF to 0), which makes adjacency less helpful than one might hope.)
> Value range calculations [...] Loop analysis and optimization [...]
Allowed but not required to retain excess bits on signed overflow.
> > Signed integer expression simplification [...]
> This is easily addressed by allowing (but not requiring) implementations to keep wider intermediate results on signed overflow, to a unspecified width at least as wide as the operands.
> > Value range calculations [...] Loop analysis and optimization [...]
> Allowed but not required to retain excess bits on signed overflow.
What does that even mean? That the code would have different behavior with a different compiler, different optimizations or when you slightly change it (depending on whether the compiler chooses to keep wider intermediate results or not)?
If I understand you correctly, then depending on the values of x and y, `(x+1)<(y+3)` would have a different result than `(x+a)<(y+b)` when a=1 and b=3, because in the first case you would be simplifying the expression but in the second case you couldn't.
That would be quite surprising, to say the least.
> No, they're done using size_t; that's what size_t is for.
for (int i = 0; i < 256; i++)
a[i] = b[i] + c[i];
So you've never seen code like this? Not all arrays are huge and for small indices people usually go for `int`.
Also, when doing arithmetic with indices, you might need to represent a negative index, so `size_t` wouldn't work. You'd need to use `ssize_t`, which is a signed integer which would benefit from these optimizations.
But even then you might use an `int` if you know the arithmetic result will fit.
> No, the fact that i (of type size_t) < length of a <= size of a in bytes <= SIZE_MAX ensures that a[i] and a[i+1] are adjacent.
Not if `i` is a signed integer, say, `int` or `int8_t`. Which is the point of the optimization.
> depending on the values of x and y, `(x+1)<(y+3)` would have a different result than `(x+a)<(y+b)` when a=1 and b=3
If x > TYPEOFX_MAX or y > TYPEOFY_MAX-2, then this can already happen with the more (maximally) vague "signed overflow is completely undefined" policy; wider intermediates just mean that code is not allowed to do other things, like crash or make demons fly out of your nose.
If x+[a or 1] and y+[3 or b] don't overflow, then computing them in a wider signed integer type has no effect, since the same values are produced as in the original type.
More generally, retained overflow bits / wider intermediates (instead of undefined behaviour) mean that when you would have gotten undefined behaviour due to integer overflow, you instead get a partially-undefined value with a smaller blast radius (and hence less opportunity for a technically-standards-conformant compiler to insert security vulnerabilities or other insidious bugs). In cases where you would not have gotten undefined behaviour, there is no signed integer overflow, so the values you get are not partially-undefined, and work the same way as in the signed-overflow-is-undefined-behaviour model. ... you know what; table:
signed overflow is \  | no overflow    | unsigned overflow | signed overflow
undefined behaviour   | correct result | truncated mod 2^n | your sinus cavity is a demon roost
wide intermediates    | correct result | truncated mod 2^n | one of a predictable few probably-incorrect results
> If x > TYPEOFX_MAX or y > TYPEOFY_MAX-2, then this can already happen with the more (maximally) vague "signed overflow is completely undefined" policy; wider intermediates just mean that code is not allowed to do other things, like crash or make demons fly out of your nose.
Yes, but saying "signed overflow is completely undefined" simply means that you are not allowed to do that, so this is a very well-defined policy and as an experienced programmer you know what to expect and what code patterns to avoid (hopefully).
If you say "signed overflow is allowed" but then your code behaves nondeterministically (i.e. giving different results when signed overflow happens, depending on which compiler and optimization level you're using or exact code you've written or slightly changed), I would argue that would actually be more surprising for an experienced programmer, not less!
It would make such signed overflow bugs even harder to detect and fix! As it would work just fine for some cases (or when certain optimizations are applied or not, or when you use a certain compiler version or Linux distro) but then it would completely break in a slightly different configuration or if you slightly changed the code.
And it would prevent tools like UBSan from working to detect such bugs because some code would actually be correct and rely on the signed overflow behavior that you've defined, so you couldn't just warn the programmer that a signed overflow happened, as that would generate a bunch of false alarms (especially when such signed-overflow-relying code was part of widely used libraries).
> More generally, retained overflow bits / wider intermediates (instead of undefined behaviour) mean that when you would have gotten undefined behaviour due to integer overflow, you instead get a partially-undefined value with a smaller blast radius (and hence less opprotunity for a technically-standards-conformant compiler to insert security vulnerabilities or other insidious bugs).
C compilers are already allowed to do what you say, currently. But I'm not sure that relying on that behavior would be a good idea :)
I think it's preferable that the C standard says that you are not allowed to overflow signed integers, because otherwise the subtlety of what happens on signed overflow would be lost on most programmers and it would be very hard to catch such bugs, especially due to code behaving differently on slightly different configurations (hello, heisenbugs!) and also because bug-detection tools couldn't flag signed overflows as invalid anymore.
> saying "signed overflow is completely undefined" simply means that you are not allowed to do that
No, saying "signed overflow is a compile-time error" means you're not allowed to do that. Saying "signed overflow is completely undefined" means you are allowed to do that, but it will blow up in your face (or, more likely, your users' faces) with no warning, potentially long after the original erroneous code change that introduced it.
> As it would work just fine for some cases (or when certain optimizations are applied or not, or when you use a certain compiler version or Linux distro) but then it would completely break in a slightly different configuration or if you slightly changed the code.
That sentence is literally indistinguishable from a verbatim quote about the problems with undefined behaviour. Narrowing the scope of possible consequences of signed overflow from "anything whatsoever" to "the kind of problems that are kind of reasonable to have as a result of optimization" is a strict improvement.
> And it would prevent tools like UBSan from working
> bug-detection tools couldn't flag signed overflows as invalid
The standard says `if(x = 5)` is well-defined, but that doesn't stop every competently-designed (non-minimal, correctly configured[0]) compiler from spitting out something to the effect of "error: assignment used as truth value".
0: Arguably a fully competently designed compiler would require you to actually ask for -Wno-error rather than having it be the default, but backward compatibility prevents changing that after the fact, and it would require configuring -Wall-v1.2.3 (so build scripts didn't break) anyway.
> > saying "signed overflow is completely undefined" simply means that you are not allowed to do that
> No, saying "signed overflow is a compile-time error" means you're not allowed to do that.
In the vast majority of cases it's not possible to statically determine if signed overflow will occur, so compilers can't do that. I'm sure they would do it, if it were possible.
> Saying "signed overflow is completely undefined" means you are allowed to do that, but it will blow up in your face
No, it does not mean that. You're not allowed to do signed overflow in standard-compliant C, period.
You're allowed to do signed arithmetic, but the arithmetic is not allowed to overflow. You can write code that overflows, but it will not have defined semantics (because that's what the standard says).
And the compiler cannot enforce or emit a warning when overflow occurs because in the general case it's not possible to statically determine if it will occur.
But if the compiler can determine it, then it will emit a warning (at least with -Wall, I think).
And if you pass the `-ftrapv` flag to GCC (and clang, probably), then your code will deterministically fail at runtime if you do signed overflow, but for performance reasons this is not required by the standard.
> > As it would work just fine for some cases (or when certain optimizations are applied or not, or when you use a certain compiler version or Linux distro) but then it would completely break in a slightly different configuration or if you slightly changed the code.
> That sentence is literally indistinguishable from a verbatim quote about the problems with undefined behaviour. Narrowing the scope of possible consequences of signed overflow from "anything whatsoever" to "the kind of problems that are kind of reasonable to have as a result of optimization" is a strict improvement.
No, because experienced programmers don't expect signed overflow to work, because it's not allowed. Such bad code would be caught if you enable UBsan. But if the C standard would require what you propose, then UBsan could not fail when a signed overflow occurs, as that could be a false positive (and therefore make such signed-overflow detection useless).
If you allow signed overflows then you have to define the semantics. And nondeterministic semantics in arithmetic is prone to result in well-defined but buggy code, while in this case also preventing bug-detection tools from being reliable.
That said, you could conceivably implement such semantics in a compiler, which you could enable with some flag, like for example, -fwrapv causes signed overflow to wraparound and -ftrapv causes signed overflow to fail with an exception.
So you could implement a compiler flag which does what you want, today, and get all the benefits that you're proposing.
You could even enable it by default, because the C standard allows the behavior that you're proposing, so you would not be breaking any existing code.
And this would also mean that existing and future C code would still be C-standard compliant (as long as it doesn't rely on that behavior).
But making the C standard require that behavior means that there will be well-defined, standard-compliant code that will rely on those weird semantics when signed overflow occurs, and that's a really bad idea.
> > bug-detection tools couldn't flag signed overflows as invalid
> The standard says `if(x = 5)` is well-defined, but that doesn't stop every competently-designed (non-minimal, correctly configured[0]) compiler from spitting out something to the effect of "error: assignment used as truth value".
The big difference in this case is that such errors are easy to determine at compile time. No compiler would cause such code to fail at run time, because that would lead to unexpected and unusual program crashes, which would make both users and programmers mad (especially if the code is correct). But for signed overflows, it's not possible to implement a similar compile-time error.
As another example, AddressSanitizer is competently-designed but as soon as you enable `strict_string_checks` you will run into false positives if your code stores strings by keeping a pointer and a length, rather than forcing them to be NULL-terminated, so that flag can be completely useless (and for my current project, it is).
Which is why I'm guessing it's disabled by default. Which means almost nobody uses that flag.
This happens because strings are not actually required to be NULL-terminated in C, even though most people use them that way. So there is code out there (including mine) that relies on strings not always being NULL terminated, and this has well-defined semantics in C.
But of course, as soon as that happens, then you can't rely on the tool to be useful anymore, because there is perfectly fine code which relies on the well-defined semantics.
Note that in this case, "strict_string_checks" is a run-time check, like detection of signed overflows would have to be.
Well, then you have moved the goal post, because initially you didn't ask for performance measurements, you asked for proof that compilers can optimize code better if they can assume signed arithmetic won't overflow.
And the blog post lists dozens of optimizations which can indeed be performed due to that decision.
The benefits of these optimizations will vary depending on the actual code and on the architecture/CPU, of course.
It will be greater for hot and tight loops that benefit from these optimizations and less important for cold parts of the code.
It will also be greater for more limited CPUs and less important for more sophisticated ones.
You could write the same approach in C as `(size_t)i+(size_t)j` without UB. The real reason it doesn't work in C is because a memory region can be large enough to still overflow in that case.
That's not exactly the same approach, because you're doing unsigned addition while the Go code is doing signed addition.
And technically speaking, I think C doesn't guarantee that 'size_t' is at least as large as a 'signed int' (even though this is true on all platforms that I know of), so your approach would fail if that weren't the case. Although, you could use 'ssize_t' instead of 'int', or 'unsigned int' instead of 'size_t' to fix that.
> The real reason it doesn't work in C is because a memory region can be large enough to still overflow in that case.
The Go code we are discussing has nothing to do with memory regions, it's a generic binary search function, so it can be used for e.g. bisecting git commits. It doesn't require the calling function to use arrays.
Although yes, if the calling code were trying to do a binary search on an array, conceptually it could fail, but in that case you could argue the bug would be in the calling function, because it would be trying to pass the array length into a binary search function which only accepts an `int` or `ssize_t` function parameter, which could result in the array length being truncated. But strictly speaking, this would not be an arithmetic overflow issue.
That said, I would just fix the code so that it works for the full 'size_t' range, since the most common use case of a binary search function is indeed to do searches on arrays. In that case, the Go approach wouldn't work indeed.
> I think C doesn't guarantee that 'size_t' is at least as large as a 'signed int'
That doesn't matter, because size_t is large enough to hold any array index (that's kind of[0] the defining property of size_t), so any array index in a signed int can be safely converted to size_t. The real problem is that (using 16-bit size_t for illustrative purposes) if you have, say, x = (size_t)40000 and y = (size_t)50000 into a 60000-element array, x+y = (size_t)90000 = (size_t)24464, which means (x+y)/2 = 12232, which is the completely wrong array element.
0: Technically, size_t is large enough to hold any object size, but array elements can't be smaller than char (sizeof can't be less than 1), so an array can't have more elements than its sizeof.
> > I think C doesn't guarantee that 'size_t' is at least as large as a 'signed int'
> That doesn't matter, because size_t is large enough to hold any array index (that's kind of[0] the defining property of size_t), so any array index in a signed int can be safely converted to size_t.
Well, the Go code we're discussing has nothing to do with arrays or array indices, so `size_t` doesn't help here.
Go look at the code :) It's a generic function for doing binary search, which accepts an `int` as a function argument, specifying the search size.
The code is then doing:
h := int(uint(i+j) >> 1) // avoid overflow when computing h
Replacing the Go expression `uint(i+j)` with `(size_t)i+(size_t)j` in C like morelisp proposed would not work correctly if `size_t` is smaller than `int`.
Pretty sure that’s not the case for 64 bit systems since you can “only” allocate about 48 bits of address space (maybe slightly more on newer systems).
For 32-bit systems, using 64-bit integers instead of size_t would similarly solve the problem.
Well, that's not something the C standard (or POSIX, etc) guarantees, is it?
Conceptually, a 64-bit kernel today could allow your program to allocate (almost) the entire 64-bit address space, assuming it does memory overcommit (like Linux) and/or uses some kind of memory compression (like Linux supports as well).
There might be some MMU limitations on today's mainstream systems, but this doesn't mean that all 64-bit systems have those limitations or that those limitations will remain there in the future.
So your code would break as soon as a new system comes along without those limitations.
Also, this would be even more true if the code and stack were stored in different address spaces, as theoretically that would even allow you to allocate the entire address space, I think.
The system you describe simply doesn’t exist, standards or no. A 64-bit kernel can’t hand out 64-bits worth of addresses because no CPU built today supports it.
A 48-bit index to an array can represent >240 TBytes of RAM minimum - if your records are > 1 byte, you have significantly higher storage requirements. The largest system I could find that’s ever been built was a prototype that has ~160TiB of RAM [1]. Also remember: to make the algorithm incorrect, the sum of two numbers has to exceed 64 bits - that means you’d need >63 bits of byte-addressable space. That just simply isn’t happening.
Now of course you might be searching through offline storage. 2^63 bytes is ~9 exabytes of an array where each element is 1 byte. Note that now we’re talking scales of about the aggregate total storage capacity of a public hyperscaled cloud. Your binary search simply won’t even finish.
So sure. You’re technically right except you’d never find the bug on any system that your algorithm would ever run on for the foreseeable future, so does it even matter?
As an aside, at the point where you’re talking about 48 bits’ worth of addressable bytes you’re searching, you’re choosing a different algorithm because a single lookup is going to take on the order of hours to complete. 63 bits is going to take ~27 years if you can sustain 20 GiB/s for comparing the keys (sure, binary search is logarithmic, but then you’re not going to be hitting 20 GiB/s). Remember - data doesn’t come presorted either, so simply getting all that data into a linearly sorted data structure is similarly impractical.
> The system you describe simply doesn’t exist, standards or no. A 64-bit kernel can’t hand out 64-bits worth of addresses because no CPU built today supports it.
"Today" being the important part. That could change tomorrow. I could implement a 64-bit CPU right now that would support it (on an FPGA). It's not an inherent limitation, it's just an optimization that current CPUs do because we don't need to use the full 64-bit address space, usually.
Also, address space doesn't necessarily correspond 1-to-1 with how much memory there is.
For example, according to the AddressSanitizer whitepaper, it dedicates 1/8th of the virtual address space to its shadow memory. It doesn't mean that you need to have 2 exabytes of addressable storage to use AddressSanitizer, or that it reads or writes to all that space.
As I said, memory overcommit and memory compression (and also page mapping in general, as well as memory mapping storage and storage compression and storage virtualization, etc) allow you to address significantly more memory (almost infinitely more) than what you actually have.
There are other tricks with memory, page mapping and pointers which could break your code if it's not standards-compliant. This could happen for security reasons or because of new compiler or kernel optimizations or new features.
So I agree that this isn't a problem right now, unless you're doing something very esoteric, but if you want to have standards-compliant code and be more future-proof then you cannot rely on that.
There is also the point that the Go code that we're discussing has nothing to do with arrays, memory or address spaces, because it's a generic binary search function that works for any function "f" passed as an argument.
For example, it can be used to do a binary search for finding the zero of a mathematical function (i.e. for finding which value of `x` results in `y` becoming zero in the equation `y=f(x)`) and this has nothing to do with address spaces.
> I could implement a 64-bit CPU right now that would support it (on an FPGA). It's not an inherent limitation, it's just an optimization that current CPUs do because we don't need to use the full 64-bit address space, usually.
You’re hand waving away way too much complexity. Please do build this system. Keep in mind that addressing 63 bits of memory with huge pages on will use up > 2 tera worth of PTEs, which translates to what, 16 terabits worth of memory? This is simply an order of magnitude more than dedicated machines ship with. You’re certainly not getting an FPGA with that.
> For example, according to the AddressSanitizer whitepaper, it dedicates 1/8th of the virtual address space to its shadow memory. It doesn't mean that you need to have 2 exabytes of addressable storage to use AddressSanitizer, or that it reads or writes to all that space.
I think you’re failing to appreciate how large 2^63 bytes is.
> As I said, memory overcommit and memory compression (and also page mapping in general, as well as memory mapping storage and storage compression and storage virtualization, etc) allow you to address significantly more memory (almost infinitely more) than what you actually have.
See point above. Such a system is just not likely to exist in your lifetime.
> but if you want to have standards-compliant code and be more future-proof then you cannot rely on that.
All code has a shelf life. What’s the date you’re working on here? I’m willing to bet it’s not an issue by the end of this century.
> You’re hand waving away way too much complexity. Please do build this system. Keep in mind that addressing 63bits of memory with huge tables on will use up > 2 Tera worth of PTEs which translate to what, 16 Terabit worth of memory? This is simply an order of magnitude more than dedicated machines ship with. You’re certainly not getting an FPGA with that.
The page table is itself stored in virtual memory, is a tree structure and it can be fully sparse, i.e. you only need to populate the PTEs that you use, basically.
Keep in mind, as long as you enable memory overcommit or use MAP_NORESERVE in mmap, you can allocate 127 TiB (~2^47 bytes) worth of virtual address space on Linux x86-64, today, at zero cost. With 4K pages!
In fact, I just did that on my laptop and memory usage has remained exactly as it was before.
And on POWER10 you can map 2^51 bytes of virtual address space today, also at zero cost.
> I think you’re failing to appreciate how large 2^63 bytes is.
No, I do appreciate it. It's 65536 times larger than the maximum address space you can allocate today on Linux x86-64 at zero cost. With 4K pages. Or a factor of 2048 larger than POWER10 can do today, also at zero cost.
In fact, with 1G HugePages, the maximum theoretical number of PTEs needed for 2^64 bytes of address space would be LESS than the number of PTEs needed for the 2^47 bytes you can allocate today, on Linux x86-64, with 4K pages, at zero cost (which I just did, on my laptop).
The maximum amount of virtual address space you can allocate is only limited by how many bits the CPU and MMU are designed to address.
Yes you can allocate that sparsely. So? If you're doing a binary search, you have to touch those pages so the sparseness is pretty irrelevant. Try doing a binary search over a memory space like that and see where you get.
Just to be clear, even if a PTE entry was just 1 pointer long (it's not), covering 63 bits of address space with 1 GiB PTEs would require >73 GiB just for the page tables. And those page tables ARE getting materialized if you're doing a binary search over that much data.
I'm not as imaginative as you, to see a world in which you can sparsely map in 2^63 elements (9 exabytes if 1 byte per element) on one CPU and then the problem you're solving is a binary search through that data, which is going to cause about log(n) pages to be mapped in to satisfy the search. 1 exabyte is probably the amount of RAM that Google has collectively worldwide. Now sure, maybe you're talking about mapping files on disk, but again, 1 exabyte is a shit ton. It's probably several clusters' worth of machines for storage. And even with 1 GiB pages, you're talking about 1 billion PTEs total, and each lookup is going to need to materialize ~9 PTEs to search. And all of that is again a moot point because no CPU like that exists or will exist any time soon.
You appear to be correct, though in my defense I didn't give a version and I have definitely been stuck on such a compiler long after 1999. (And I suspect they're still over-represented for 32 bit systems.)
There are still edge cases here - various posters here have mentioned them.
The proper method is to type promote first - not just to unsigned but to a wider variable type - 32 to 64 bits or from 64 to 128 bits. Unsigned simply gives a single extra bit, while erasing negative semantics. Promoting to twice the size works for either addition or multiplication. The benefits are correctness and the ability to be understood at a glance.
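A sketch of that approach for 32-bit indices (int64_t from <cstdint>; the function name is mine):

    #include <cstdint>

    int32_t midpoint_wide(int32_t x, int32_t y) {
        // The 64-bit intermediate cannot overflow for any pair of 32-bit inputs,
        // and the halved result always fits back into 32 bits.
        return (int32_t)(((int64_t)x + (int64_t)y) / 2);
    }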
Calling binary search and mergesort implementations "broken" does the author no service with his argument. If the key lesson is to "carefully consider your invariants" then the proper takeaway is that binary search and mergesort implementation lose generality with large arrays.
The implementation shown works perfectly for arrays on the order of 2^30. Calling them broken is like saying strlen is broken for strings that aren't null terminated.
This implementation works for x < 2^10 and y < 2^10. Arguably this implementation is much worse than the previous one because it fails unexpectedly. At least the previous implementation would be much more obviously broken.
But these are both broken because they don't fulfill the (implicit) contract for add. You can't just say "well, it's implied that my add function only takes inputs that add to 1" unless you actually write that somewhere and make it clear.
I get what you're saying but I don't think they're analogous. If nothing else, strlen is defined only with null-terminated strings; this comes in both the spec itself, as well as the documentation of pretty much every implementation you find. Whereas most binary search implementations don't claim they only work under some particular inputs. (I think there are likely more differences too, but this is sufficient to make my point.)
More generally, I feel like the thought process of "it's not broken if it works fine for inputs that occur 99% of the time" is an artifact of how little attention we pay to correctness, not something that is intrinsically true. If your function breaks for inputs that are clearly within its domain without any kind of warning... it's broken, as much as we might not want to admit it. We're just so used to this happening near edge cases that we don't think about it that way, but it's true.
> most binary search implementations don't claim they only work under some particular inputs
They do implicitly. It's just common sense. When you read a recipe in a cookbook, it usually doesn't mention that you're expected to be standing on your legs, not on your arms. The reader is expected to derive these things themselves.
A lot of generic algorithm implementations will start acting weird if your input size is on the order of INT_MAX. Instances this big will take days or weeks to process on commodity CPUs, so if you're doing something like that you would normally use a specialized library that takes these specifics into account.
>> most binary search implementations don't claim they only work under some particular inputs
> They do implicitly. It's just common sense.
That's neither how language specifications work, nor true in this case even if it's true in other cases. Providing one more of the same kind of input that already works is in no way the same thing as changing something totally unrelated.
> When you read a recipe in a cookbook, it usually doesn't mention that you're expected to be standing on your legs, not on your arms.
I don't think this binary search was breaking because of people standing on their arms either.
> A lot of generic algorithm implementations will start acting weird if your input size has the order of INT_MAX. Instances this big will take days or weeks or process on commodity CPUs,
It's incredibly strange to read this from someone in 2022. I don't know of any standard library algorithm that would take "days or weeks" for inputs of size 2^31 now, let alone the majority of them being like this. In fact I don't think this was the case back when the article was written either.
Ok, I looked at it closer and I admit that quicksort implemented in C won't take days on an input of 2³¹ elements. It will take less than 1-2 hours, I think. Something that is a bit worse than O(n log n) or has a 20× bigger constant hidden in O(·) will take days though.
I don't see my other arguments being convincingly refuted, so they still hold.
> Ok, I looked at it closer and I admit that quicksort implemented in C won't take days on an input of 2³¹ elements. It will take less than 1-2 hours, I think.
How ancient is your machine? Quicksorting (2^31 - 16) elements (because that's Java's hardcoded limit) takes < 11 seconds on my machine, and a big chunk of that time is taken in the random number generation to create the input...
# Temp.java
import java.util.*;
public class Temp { public static void main(String[] args) { byte[] arr = new byte[0x7FFFFFF0]; new Random().nextBytes(arr); Arrays.sort(arr); } }
$ javac Temp.java && time -p java Temp
real 10.30
No, wait. The QuickSort from the article is O(n²), in fact. So that one specifically will take weeks, or even months to run — especially in Java. Feel free to test it and get back to me if you think I'm wrong.
> No, wait. The QuickSort from the article is O(n²),
What article are you referring to? This article is about binary search and mergesort, not quicksort?
And which quicksort has O(n^2) typical behavior? That's the worst-case behavior you normally only get on adversarial inputs, not the typical one. (And standard libraries have workaround for that anyway.)
Sorry, I indeed switched to a wrong browser tab and skimmed instead of reading when started this discussion. Please disregard the quicksort discussion.
I still think that it's normal for programs to behave weird when the input parameters are getting close to INT_MAX. Sometimes it's unavoidable. And if it's not specified in the function docs, you should go and check the implementation as a programmer. For binary search it is avoidable, so the criticism of the linked implementation is fair.
What on Earth are you talking about? There's nothing "iamverysmart" about the blogpost at all. The guy literally cites an example where the code broke in production, it isn't an esoteric hairsplitting point at all.
They tried to implement a standard algorithm themselves and failed. Doesn't mean that almost all binary searches are wrong. The C++ standard library had a correctly implemented binary search with a more flexible signature when this article was written; they could just have used that one instead.
This blog post predates r/iamverysmart. There was a way of talking and discourse in 2006 that this is very much an example of. One has to take things from the time they were written.
It’s a clumsy formulation, but if what he means is that you need to be assured that the model you’re proving in accurately reflects the behavior of what is being modeled then he is correct at least sometimes. For example a naive Z3 proof of the mid procedure would be valid since Z3 ints are unbounded. The issue isn’t that the proof is wrong, it’s that the model is.
If the system has a well written formal specification then your model can be built from that without error if done diligently. One real world example is the first Algol 60 compiler, which was built to a formal specification. On the other hand if there is no useful spec or no spec at all then you end up needing to experiment, ie test, and get your model as close as you can.
Grandparent is correct. If you've proven the behavior correct, you don't need to test. The proof is the test. This is usually only true in languages-that-are-proof-assistants (idris). In the cases above, they hadn't actually formally proven the behavior correct.
If instead of 'int' you were to use 'size_t' (or the equivalent of that provided by your programming language of choice), then there should be no issues in practice. Then you would only see overflows if your elements were 1 byte in size, and the input spans more than half of the virtual address space. This is unlikely for two reasons:
1. If you only have single byte elements, you'd better use counting sort.
2. There always tend to be parts of the virtual address space that are reserved. On x86-64, most userspace processes can only access 2^47 bytes of space.
It is unfortunate that the language doesn't have a built-in "average between two ints" function. It is a common operation, people often get it wrong, as shown by this article, and it may have a really simple and correct assembly representation that the compiler may take advantage of.
Such a function, even if it seems trivial, has some educative value as it opens an opportunity to explain the problem in the documentation.
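(As an aside, C++20 did eventually add exactly such a function, std::midpoint in <numeric>. A minimal usage sketch:)

    #include <numeric>
    #include <cassert>

    int main() {
        int lo = 2000000000, hi = 2100000000;
        int mid = std::midpoint(lo, hi);   // overflow-free, unlike (lo + hi) / 2
        assert(mid == 2050000000);
    }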
I feel that it’s so simple that many people will overlook that it even exists. In languages that have both, it’s hard for functions to compete with operators. I don’t think that this is the best design to promote correctness.
Maybe, but providing simple functions for "obvious" operations, to promote correctness, make it easier for the compiler, or just for convenience is not uncommon at all. Most languages have a min/max function somewhere, sometimes built-in, sometimes in the standard library, even though it is trivial to implement. C is a notable exception, and it is a problem because, you have a lot of ad-hoc solutions, all with their own issues.
If you look at GLSL, it has many function that do obvious things, like exp2(x) that does the same thing as pow(2,x), and I don't think anyone has any issue with that. It even has a specific "fma" operation (fma(a,b,c) = a*b+c, precisely), that solves a similar kind of problem as the overflowing average.
Knuth’s section on binary search in The Art of Computer Programming is enlightening. One historical curiosity that he notes is that it took something like a decade from the discovery of the algorithm to an implementation that was correct for all inputs.
I briefly tried using binary search as a weeder problem and quickly abandoned it when no one got it right.
Suppose your high, low and mid indexes are as wide as a pointer on your machine: 32 or 64 bits. Unsigned.
Suppose you're binary searching or merge sorting a structure that fits entirely into memory.
The only way (low + high)/2 will overflow is if the object being subdivided fills the entire address space, and is an array of individual bytes. Or else is a sparsely populated, virtual structure.
If the space contains distinct objects from [0] to [high-1], and they are more than a byte wide, this is a non-issue. If the objects are more than two bytes wide, you can use signed integers.
Also, you're never going to manipulate objects that fill the whole address space. On 32 bits, some applications came close. On 64 bits, people are using the top 16 bits of a pointer for a tag.
> Suppose your high, low and mid indexes are as wide as a pointer on your machine: 32 or 64 bits. Unsigned.
Yeah, if you suppose that, you can correctly conclude that you only run into overflow if the object is a byte array that fills more than half the address space (though not the entire address space as you say). And that's why this problem remained unnoticed from 01958 or whenever someone first published a correct-on-my-machine binary search until 02006.
But suppose they aren't. Suppose, for example, that you're in Java, where there's no such thing as an unsigned type, and where ints are 32 bits even on a 64-bit machine. Suddenly the move to 64-bit machines around 02006 demonstrates that you have this problem on any array with more than 2³⁰ elements. It's easy to have 2³⁰ elements on a 64-bit machine! Even if they aren't bytes.
Is it that low and high are both floating point, so you're not constrained by int precision and so you don't get an overflow error? The article makes it sound like sign switching is the issue, but this is just a general overflow problem, right?
The ">>>" operator works, the ">>" operator doesn't. The reason the former works is that it basically performs unsigned division by a power of 2; the latter does it signed. There's no floating-point.
No, it's because negative numbers are technically stored as larger values than positive numbers in the two's complement representation most computers use to store integers. Neither low nor high is a float.
Example with 8-bit integers (from wikipedia):
Bits, Unsigned value, Signed value
0000 0000, 0, 0
0000 0001, 1, 1
0000 0010, 2, 2
0111 1110, 126, 126
0111 1111, 127, 127
1000 0000, 128, −128
When the logical bit shift is conducted on -128, -128 is treated as an unsigned integer. Its sign bit gets shifted such that the integer becomes 0100 0000, aka 64.
But this is pseudocode. For all you know, it could be implemented in a language whose integers are arbitrary precision, in which case it is perfectly correct and appropriate.
Python, for example, has arbitrary precision integers. That means that it is theoretically possible to represent any whole number in Python, at least assuming your computer has enough memory to support it. Under the hood, the `int` object can have several different implementations depending on how large the number is. So small numbers will be represented one way, and larger numbers might be implemented as 64-bit integers, but very large numbers are implemented as an array of other integers that can grow arbitrarily large. You can think of the array as being like base-10 representation (so 17,537 might be represented as [1, 7, 5, 3, 7]), although in practice much larger bases are used to make the calculations quicker.
Obviously maths with the smaller representations will be quicker than with this array representation, so the interpreter does some work to try and use smaller representations where possible. But if you tried to, say, add two 64-bit signed ints together, and the result would overflow, then the interpreter will transparently convert the integers into the array representation for you, so that the overflow doesn't happen.
So the first poster said that the default merge sort implementation on Wikipedia was buggy, because it doesn't protect against overflows (assuming that the implementation used fixed-sized integers). The second poster pointed out that if the implementation used these arbitrary precision integers, then there is no chance of overflow, and the code will always work as expected.
You can look up "bigint" which seems to be the term of art for implementations of arbitrary precision integers in most languages. You can also read a bit about how they're implement in Python here: https://tenthousandmeters.com/blog/python-behind-the-scenes-...
> Python, for example, has arbitrary precision integers.
In the spirit of nitpicking on edge cases:
It does, but quite often you pass a number to some C library (other than the stdlib) and C does not honour this arrangement.
This means that instead of fixed-width integer types that have a finite maximum due to being 32-bit or 64-bit etc., the language could use an integer type that can grow to be as many bytes as is needed to store the number. This is called a BigInt in JavaScript, for instance.
Python3 doesn't have a maximum integer and therefore cannot experience overflow when adding two integers, for example. You can keep adding one forever.
I'm aware (plus the fact that the algorithm is correct in Python). It's very unlikely that this is an argument I can win.
I'm taking a pragmatic perspective: like it or not, people are going to skim the article and copy & paste the pseudocode.
Given that the pseudocode is buggy in the vast majority of programming languages and the user isn't informed about this in the pseudocode, it's going to lead to unnecessary bugs.
Spoiler: If you are using Javascript, this bug only affects you if your arrays have more than Number.MAX_SAFE_INTEGER/2 entries, which is about 2^52. In other words, don't waste your time with fixing this bug.
Unless you're binary searching something other than a data structure. Fascinatingly, binary search works just fine in optimization problems where the function to optimize is monotonic.
Anyone dealing with arrays containing a billion elements or more really ought to be using 64 bit arithmetic to avoid problems like this. Certainly better to do this the right way though.
This is a great example of how good algorithms are software plus hardware. The idea that a pure mathematical idea can be naively implemented on any hardware has never truly materialized.
Yes, we are a long way from flipping switches to input machine code, but there are still hardware considerations for correctness and performance, e.g. the entire industry of deep learning running somewhat weird implementations of linear algebra to be fast on GPUs.
Oh boy, in 2022 you could not afford to write a broken binary search in any serious coding interview. Back before 2006, apparently, PhD students at CMU could not.
Are you kidding? If you were asked in a coding interview to write a binary search, and you wrote the broken version in the post on a whiteboard, you'd be in the top 5% of applicants. Most applicants can barely write a for loop on the board.
Please don't bother with posts like this. They don't add anything useful to discussion, and are against site guidelines:
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
I'd put the blame on languages that don't allow exceptions, and whose return value in case of errors belongs to the same domain as the solution.
I've coded binary searches and sorts tons of times in C++, and yet none was susceptible to this bug. Why? Because, whenever you're talking indices, you should ALWAYS use unsigned int. Since an array can't have negative indices, if you use unsigned ints the problem is solved by design. And, if the element is not found, you throw an exception.
Instead, in C you don't have exceptions, and you have to figure out creative ways for returning errors. errno-like statics work badly with concurrency. And doing something like int search(..., int* err), and setting err inside of your functions, feels cumbersome.
So what does everyone do? Return a positive int if the index is found, or -1 otherwise.
In other words, we artificially extend the domain of the solution just to include the error. We force into the signed integer domain something that was always supposed to be unsigned.
This is the most common cause for most of the integer overflows problems out there.
When you’re talking indices, you should NEVER use int, unsigned or not. The world is 64-bit these days and int is stuck at 32 bits almost everywhere. And even on 32-bit systems indexing with unsigned int may not be safe unless you think about overflow, as this bug demonstrates (at least unsigned overflow is not immediate UB in C and C++ like signed overflow is…)
To be fair, size_t doesn't solve this particular problem; you also need to use correct array slice representation (ptr,len) not (start,end), and calculate the midpoint accordingly (ie (ptr,len/2) or (ptr+len/2,len-len/2)).
(And because C doesn't mandate correct handling of benign undefined behavior, you still have a problem if you `return ptr-orig_ptr` as a size_t offset (rather than returning the final ptr directly), because pointer subtraction is specified as producing ptrdiff_t (rather than size_t), which can 'overflow' for large arrays, despite that it's immediatedly converted back to a correct value of size_t.)
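A rough sketch of the (ptr, len) style described above (the function and its names are mine; valid as both C and C++):

    #include <stddef.h>

    const int *lower_bound_slice(const int *p, size_t len, int key) {
        while (len > 0) {
            size_t half = len / 2;        // midpoint expressed as (ptr, len/2)
            if (p[half] < key) {          // answer is in the upper half
                p += half + 1;
                len -= half + 1;
            } else {                      // answer is in the lower half, including p[half]
                len = half;
            }
        }
        return p;                         // first element not less than key
    }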
The problem is not solved by using unsigned ints though, because it stems from integer overflow. I'm afraid your implementations are, alas, also incorrect.
The type that’s meant for indexing in C and C++ is called `size_t`. It is pointer-sized. In Rust it’s called `usize` and Rust does not have implicit conversions, so if you accidentally use too narrow an integer type to compute an index, at least Rust forces you to add an explicit cast somewhere.
No. This article explicitly mentions the "int" type, which is exactly 32 bits in Java and typically 32 bits in C and C++. 32-bit ints are not large enough for this purpose: they can only index 2 billion items directly (which will overflow a lot given that a standard server now has 256-512 GB of RAM), and this average calculation hits problems at around 1 billion items. Overflows on 64-bit ints (when used to store 63-bit unsigned numbers) are not going to happen for a very long time.
Wasn't Array.length 32-bit on Java when the article was written? In fact, isn't it 32-bit even now?
Moreover I don't see how you deny that using signed would lose functionality in this case—it's pretty undeniable that it gives the wrong answer in cases where unsigned would give the correct answer; the code is right in the article and you can test it out. This is true irrespective of any other cases that neither might handle correctly (assuming you believe any exist, but see my previous paragraph).
I think it'd be nice if you give some examples of how using unsigned integers for indices breaks code in cases where signed integers don't, because otherwise your comment is very unilluminating.
you need to google why size_t exists.
size_t is guaranteed to be able to represent any array index. unsigned int can be as small as 16 bits, so a big enough loop may cause unsigned overflow on some architectures; in other words, your code would be broken. size_t will make it work correctly everywhere. It is the correct way of representing array indices.