In general, trying to perform operations which truncate on negative numbers is fraught with peril, because everyone seems to do it differently and you need to look it up. Another one that tends to bite people is the modulus operator where either operand is negative: programming languages tend to differ on what the result should be. For example:
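A quick Python illustration of the divergence (my example values, not from the original comment): Python's own `%` uses floored division, while `math.fmod` exposes the C-style convention of truncating toward zero.

```python
import math

# Python's % takes the sign of the divisor (floored division):
print(-7 % 3)            # 2
print(-7 // 3)           # -3

# C's % takes the sign of the dividend (truncated division);
# math.fmod gives that behavior from Python:
print(math.fmod(-7, 3))  # -1.0
```

Same operands, two different answers, both "correct" per their language's spec.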
Modern language standards are explicit about what the operations do and modern architectures all have two variants of right shift (arithmetic and logical, in x86 parlance) to handle the difference. This isn't a problem in practice for compilers in the modern world, though it remains a good warning for developers writing their own optimizations.
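If you want to see the two variants side by side, here's a small sketch in Python (which only has arithmetic shift natively; a logical shift has to be faked by masking to a fixed width — the 8-bit width below is my arbitrary choice):

```python
def asr(x, n):
    """Arithmetic shift right: the sign bit propagates (Python's native >>)."""
    return x >> n

def lsr(x, n, width=8):
    """Logical shift right: zero-fill, on a fixed-width view of x."""
    return (x & ((1 << width) - 1)) >> n

print(asr(-8, 1))  # -4: sign preserved, equivalent to floor(-8 / 2)
print(lsr(-8, 1))  # 124: -8 as 8 bits is 0b11111000; zero-filled shift
```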
Really it's a note from a world we've forgotten, where "all" computer operations were unsigned and 2's complement math was a clever trick deployed only occasionally.
It's true that 2's complement wasn't always universal, but von Neumann proposed it in the EDVAC report (1945), and it was the IBM System/360 (1964) that really championed it. The PDP-8, -9, -10, and -11 all used 2's complement. By 1976, when this MIT AI Lab report was written, 2's complement was pretty standard.
I think this is just the MIT AI Lab tilting at windmills. They had their quirks, like wanting the world to call pixels "pels" well after everyone else had settled on "pixels".
> Dividing -1 by 2 gives a quotient of 0 and a remainder of -1 (See the DIVMOD routine if you want to check this out). But shifting -1 right by one bit gives -1. [1]
Yes, this is because asr/and rounds toward negative infinity (as it should), while your DIVMOD routine (which should also round toward negative infinity) incorrectly[0] implements QUOREM (round toward zero).
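Python happens to make the distinction easy to reproduce, since `//` floors while `int()` truncates (my illustration, not from the thread):

```python
# Floored division: what an arithmetic shift right computes for powers of two.
assert -1 >> 1 == -1
assert -1 // 2 == -1 and -1 % 2 == 1

# Truncated division: quotient 0, remainder -1, matching the quoted DIVMOD.
q = int(-1 / 2)     # int() truncates toward zero
r = -1 - 2 * q
assert q == 0 and r == -1
```

Both pairs satisfy a == b*q + r; they just disagree on which way the quotient rounds.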
0: Well, you could say that it's rather the name that is incorrect, since I gather it's intended to implement quo/rem rather than div/mod.
There is no standard that says quo/rem or div/mod should mean one or the other. And in fact there are more than just two choices for how to implement these. See e.g. https://en.wikipedia.org/wiki/Modulo_operation
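That Wikipedia page lists several conventions; here's a sketch of three of them in Python (function names are mine, and the float division in the truncated variant is only safe for small integers):

```python
import math

def divmod_truncated(a, b):
    """Quotient rounds toward zero; remainder takes the sign of a (C-style quo/rem)."""
    q = int(a / b)  # int() truncates; fine for small ints only
    return q, a - b * q

def divmod_floored(a, b):
    """Quotient rounds toward -inf; remainder takes the sign of b (Python's // and %)."""
    q = a // b
    return q, a - b * q

def divmod_euclidean(a, b):
    """Remainder is always non-negative, regardless of either sign."""
    q = math.floor(a / b) if b > 0 else math.ceil(a / b)
    return q, a - b * q

print(divmod_truncated(-7, 3))  # (-2, -1)
print(divmod_floored(-7, 3))    # (-3, 2)
print(divmod_euclidean(-7, 3))  # (-3, 2)
```

All three satisfy a == b*q + r; they only differ on where the quotient rounds, which is exactly why "DIVMOD" vs "QUOREM" naming arguments never settle anything.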