
How so? Are you talking about (high + low)/2? Whether that is correct depends on the language. With 64-bit arithmetic it's correct in practice, and with arbitrary-precision arithmetic it's correct in practice and in theory. It only really causes problems if you're using 32-bit arithmetic on a 64-bit machine.
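For concreteness, here's a minimal C sketch (the index values are hypothetical, picked to trigger the failure) of where the naive midpoint breaks under 32-bit arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Hypothetical indices near the top of a ~2^31-element array. */
        int32_t low  = 1500000000;
        int32_t high = 2000000000;

        /* low + high = 3500000000 exceeds INT32_MAX (2147483647).
           Signed overflow is undefined behavior in C; on typical
           two's-complement machines the sum wraps negative, so mid
           comes out as a bogus negative index. */
        int32_t mid = (low + high) / 2;

        printf("mid = %d\n", (int)mid);
        return 0;
    }

With 64-bit or arbitrary-precision arithmetic the same sum stays in range, which is the point above.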



Just keep telling yourself that.

Or, look up the right answer.


Perhaps instead of being mean, you could be helpful and post a link to said answer.


With 64-bit variables it probably works. But for smaller widths, here is the correct implementation: https://ai.googleblog.com/2006/06/extra-extra-read-all-about...
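The main fix from that post (given there in Java), transcribed to a C sketch: compute the offset first, so no intermediate value ever exceeds a valid index. The function name safe_mid is just mine for illustration.

    #include <stdio.h>

    /* Overflow-free midpoint for 0 <= low <= high: the difference
       high - low always fits in int, and low + (high - low) / 2
       never exceeds high. */
    static int safe_mid(int low, int high) {
        return low + (high - low) / 2;
    }

    int main(void) {
        printf("%d\n", safe_mid(1500000000, 2000000000));  /* 1750000000 */
        return 0;
    }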


This implementation:

    mid = ((unsigned int)low + (unsigned int)high) >> 1;
essentially extends the bit width from 31 bits to 32 bits by going from signed to unsigned, and is thus correct up to array lengths on the order of 2^31 rather than 2^30. Using an even larger bit width is at least as correct as this.
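To sanity-check that bound (assuming 32-bit int, as the snippet does), here's a sketch with the worst-case indices for an array of length 2^31 - 1:

    #include <stdio.h>

    int main(void) {
        /* Worst case: low == high == 2^31 - 2, the last valid index
           of an array of length 2^31 - 1. */
        int low  = 2147483646;
        int high = 2147483646;

        /* As signed ints the sum 4294967292 would overflow; as
           unsigned 32-bit values it fits, and the shift halves it
           back into signed range. */
        unsigned int mid = ((unsigned int)low + (unsigned int)high) >> 1;

        printf("mid = %u\n", mid);  /* 2147483646 */
        return 0;
    }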

In summary,

(low + high)/2 or (low + high) >> 1 is correct up to array lengths of about 2^30 for 32-bit signed, 2^62 for 64-bit signed, 2^31 for 32-bit unsigned, and 2^63 for 64-bit unsigned. The pattern: the worst case is low = high = length - 1, so the sum is roughly twice the largest index and needs one bit of headroom beyond the index width.



