Hacker News

I didn't downvote you. However, I am a math person, and your post feels wrong for some reason. It isn't wrong, but it feels like it should be, and I am not entirely sure why.

CPUs use modular arithmetic for both signed (two's complement) and unsigned values, so that cannot be the defining feature of two's complement. The insight of two's complement is that we can interpret the "upper" half of the numbers as negative. As you say, this should be very familiar to anyone who has worked with modular arithmetic. For those who have not, this just means that (in 3-bit two's complement, i.e. integers mod 8) we interpret 7 = 8 - 1 = -1, and so on. As a result, operations on signed and unsigned integers are literally identical. Almost.

CPUs differ from modular arithmetic in one way: multiplication. Specifically, CPUs do not do modular arithmetic for multiplication. The result of multiplying two n-bit numbers can be as big as 2n bits, so when CPUs do this multiplication, they store the result in two registers. If you only look at the bottom register, the result is equivalent to modular arithmetic. However, if you consider the top register, it is not (this is why there are separate MUL and IMUL instructions).

Arguably, the same thing happens for addition. However, because at most one extra bit is needed, it is not given its own register; instead, the extra bit is stored in the carry flag.

The other insight of two's complement is the encoding. Under the construction presented above, given the 3-bit two's complement number 0b111, we would have to compute that:

    0b111 > 0b011
    0b111 = 0b1000 - 0b0001
The insight of two's complement is a way of performing this computation using primitive bitwise operations. Specifically:

   If the first bit is 0, we are done
   Otherwise, perform bitwise negation, add 1, and consider the result "negative". 
There is no obvious equivalent of this method for other bases.





> CPUs use modular arithmetic for both signed (two's complement) and unsigned values, so that cannot be the defining feature of two's complement

I disagree, because this still fits nicely into modular arithmetic.

Signed versus unsigned just means that we choose a different set of representatives for certain operations (such as comparison). For unsigned, we use the smallest non-negative representative. For signed, we use the representative with the smallest absolute value (and prefer the negative one if there is a tie). Still, nothing binary-specific.

Except for one single detail: the "tie" is resolved in favour of the negative number, because that way the first bit always denotes the sign. This little detail is binary-specific, but that's it.

> CPUs differ from modular arithmetic in one way: multiplication. Specifically, CPUs do not do modular arithmetic for multiplication. However, the result of the multiplication of two n-bit numbers could be as big as 2n bits. When CPUs do this multiplication, they store the result in two registers. If you only look at the bottom register, the result is equivalent to modular arithmetic.

Good point, but in most (non-assembly) code that I see, the result of a multiplication is stored in a same-size integer, so I'd argue the full-width result isn't actually used that much. I agree that this is still a difference, though.

> The insight of two's complement is a way of performing this computation using primitive bitwise operations.

I believe that negation is not what this is all about. To the contrary, negation is more complicated for two's complement than for other representations. For example, in sign-magnitude representation you just flip a single bit to negate a number.

No, the point is that there are no special cases for increment, decrement, addition, subtraction, and multiplication (with the small difference discussed above). And there is not even a difference between signed and unsigned arithmetic except for comparison (also discussed above). This is what works out perfectly well in modular arithmetic, and it has nothing to do with binary.


"Two's Compliment represents a linear automorphism over a cyclic group" is what I think you're trying to say. russdill & co are hinting that Modular Arithmetics are cyclic groups.

I think for most people, the aha moment comes when they realize One's Complement double-counts zero. In contrast, linearity is simply assumed. Otherwise, why would anyone make a number system out of it?



  > However, the result of the multiplication of two n-bit numbers
  > could be as big as 2n-bits.
For unsigned multiplication. For signed two's complement multiplication, it's at most 2n−1 bits, and that is only when multiplying the two most negative numbers; otherwise it's at most 2n−2 bits.

~~~ wavy line flashback ~~~

A few years ago, I did a C code generator for a 2's complement 16-bit processor that had mostly adopted the convention that 0x8000 was an invalid sentinel value, so that integers would have a symmetric range around zero, ±0x7FFF. This was mostly a software convention (i.e. the ALU didn't trap or anything like that) except that, given that 0x8000 was 'unused', the multiplier only produced a 30 bit result. This made C 'long' multiplication interesting.


Another way to do signed comparisons (I think) is to simply invert the top bit and do an unsigned comparison. For the 3-bit example this is equivalent to adding 4 to both numbers, which shifts the range from [-4, 3] to [0, 7].

So in hardware this amounts to routing one bit of the instruction to be XORed with the sign bits of the operands, and you get both signed and unsigned comparison. It's these trivial hardware implementations that made 2's complement the standard way of doing things. Using two gates to implement a variation on an instruction is awesome.

edit: it would not surprise me if there is an even simpler way to do it.



