for 64 bit size_t it's pretty academic, but no, not correct.
suppose low is M-3 and high is M-1 (where M is 2^[#bits]); then mid should be M-2. but (M-3 + M-1) is, modulo M, equal to M-4, and half that is M/2 - 2, which is a lot less than the correct M-2. (pretend it's 2-bit unsigned ints, so M is 4: low = 1, high = 3, mid should be 2, but you get (1+3) mod 4 = 0, halved to 0.)
the correct 'mid' computation is low + (high - low)/2.
So the article isn't wrong, but it's misleading: it leads you to think this is correct for the wrong reasons.
They are using signed integers as their indices, which means the sign bit is always 0 (valid indices are nonnegative). Thus the addition after casting to unsigned will never overflow, and you can divide by two (shift right by 1) and then recast to a signed integer, no harm no foul.
Sure, and if you expect your software to run on such a platform with any degree of confidence, then you're right to consider it. Even better, tell us about a conforming implementation that you've used in a product recently that has UINT_MAX == INT_MAX.
Also, I'm not trying to mislead people into thinking that it's a good way to implement this. The confounding bit from the article is that they started in Java and ended up in C. If you were indexing with signed ints in C, C++, or any language that has unsigned integers, then you already had a bug with or without the bad mean check.
They didn't say unsigned overflow was undefined, but you're right that the offset won't be correct. It seemed intuitively wrong to me, but I figured Google Research is probably a reliable source!