Yes, there are two zeros. Not a problem in practice, because all arithmetic operations normalize -0 to +0 at no extra cost in execution time, so -0 practically doesn't happen. IIRC from Assembler class, addition is implemented as subtraction of the negative operand. Just in case anyone ever needs it, e.g. for bitmaps, the SZ (store zero) assembler instruction is complemented by an SNZ.
Programming in a high-level language, the representation of negative numbers is essentially transparent to the programmer. When reading dumps, not having to perform an extra addition (subtraction?) when changing a number's sign is pretty sweet! Also, abs(-MAXINT) == (+MAXINT), reliably. The asymmetry of number ranges always bothered me on 2's complement machines.
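The symmetric range is easy to demo. Here's a sketch of one's-complement negation at a toy 6-bit width (the width and names are illustrative, not UNIVAC's 36 bits):

```python
# Toy 6-bit one's-complement negation: a pure bit flip, no carry chain.
WIDTH = 6
MASK = (1 << WIDTH) - 1          # 0b111111

def ones_complement_negate(x):
    """Negate by flipping every bit within the word width."""
    return ~x & MASK

MAXINT = MASK >> 1               # 0b011111 == 31, the largest positive value
MINUS_MAXINT = ones_complement_negate(MAXINT)   # 0b100000

# The range is symmetric: abs(-MAXINT) == +MAXINT, reliably.
assert ones_complement_negate(MINUS_MAXINT) == MAXINT
# And both zeros exist: +0 is 0b000000, -0 is 0b111111.
assert ones_complement_negate(0) == MASK
```

Negation being a bit flip is also why sign changes cost nothing when reading a dump by eye.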
What _is_ annoying about the UNISYS boxes is the 36 bit word format, though. Characters are stored in 9 bit quarterwords that map pretty awkwardly to bytes containing 8-bit ASCII. Binary data formats are essentially incompatible with anything.
This is why the FTP protocol has a byte size command. If all you have is 8-bit bytes then that seems strange. But at the time FTP was designed the most common machines on the ARPANET had 36-bit words (mostly PDP-10s and their derivatives) and bytes (the term was used in the more general sense) were just bit strings of 1-36 bits. 7-bit ascii was common (5 characters would fit in a word, like my username GUMBY), as were six bit bytes (pack six characters into a word). I never used 9-bit characters though arrays of nine-bit bytes were not unreasonable.
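The six-characters-per-word packing can be sketched like this; the ASCII−32 offset matches DEC's SIXBIT convention, but treat the helper itself as illustrative:

```python
# Sketch of PDP-10-style SIXBIT packing: six 6-bit characters per
# 36-bit word. SIXBIT maps the ASCII range ' '..'_' onto codes 0..63.
def pack_sixbit(s):
    assert len(s) <= 6
    word = 0
    for ch in s.upper().ljust(6):    # SIXBIT has no lowercase; pad with spaces
        code = ord(ch) - 32          # ' ' -> 0, 'A' -> 33, '_' -> 63
        assert 0 <= code < 64
        word = (word << 6) | code
    return word                      # fits in 36 bits

w = pack_sixbit("GUMBY")
assert w < 1 << 36
```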
BTW the PDP-10 had 18-bit addresses so each word of memory held a Lisp cons; CAR, CDR, RPLACA etc were machine instructions. Gordon Bell and Alan Kotok designed the -10 (and its predecessor the PDP-6) with Lisp in mind. The first Lisp Machines.
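A minimal sketch of the word-as-cons-cell layout, assuming CAR in the left 18-bit halfword and CDR in the right (the PDP-10's halfword instructions made each access a single operation; the exact half used for CAR vs. CDR varied by Lisp implementation):

```python
# One 36-bit word holds a whole cons cell: two 18-bit addresses.
HALF = (1 << 18) - 1

def cons(car_addr, cdr_addr):
    return ((car_addr & HALF) << 18) | (cdr_addr & HALF)

def car(word):                   # left halfword (cf. HLRZ)
    return (word >> 18) & HALF

def cdr(word):                   # right halfword (cf. HRRZ)
    return word & HALF

cell = cons(0o1234, 0o4321)
assert car(cell) == 0o1234 and cdr(cell) == 0o4321
```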
> Binary data formats are essentially incompatible with anything.
Well, that's true today, but look at it the other way around: Unix was really developed for an 8/16-bit machine. It was a reimplementation of Multics, which ran on 36-bit machines (the GE 645 & Honeywell 6180) and was written in PL/I. Unix famously began on the PDP-7 (an 18-bit machine), but in assembly. The famous PDP-11 version was written in a BCPL derivative you might have heard of called "C", and, since PL/I's level of machine abstraction was still new, the derivative modeled the PDP-11 architecture. So nowadays all CPUs are C machines and C runs well on them. Probably the most common non-PDP-11-like machine most programmers will program these days is a GPU.
There were a bunch of different six bit character encodings, often (though not always strictly correctly) called "BCD". The horror show of IBM's EBCDIC was an eight bit extension of one of these.
Then there was 5 bit Baudot code, and...
The last time I checked, many *nix systems will still assume that you're on a 5 bit Baudot (uppercase only) teletype (i.e., a genuine physical tty) if you attempt to log in using all uppercase in your user name.
Some systems hacked in more characters by having special "shift in" and "shift out" characters. If a "shift in" character appeared in the stream, the system would switch to the alternate character set until a "shift out" character was received.
> … *nix systems will still assume that you're on a 5 bit Baudot (uppercase only) teletype …
It's not like the 8 bit byte was the initial default and others were experiments.
FWIW I believe the 8-bit byte was an IBMism, and for whatever reason I don't remember IBM machines being particularly popular on the ARPANET, which was a research network.
Although its arrival well predated me I do remember a conversation with someone in which we were surprised by how it was becoming common to see people assume that a byte was a fixed 8 bits. I think that was entirely due to the spread of the Vax.
The IBM 360 (1964–) was the killer 8-bit-byte machine.
The Stanford PDP-6 apparently even had a CONS instruction at opcode 257 :)
So I've not worked with 1's complement machines, but how do you add a negative number to a positive number?
For instance, 5 + -3 in six-bit one's complement would naively be:

      000101   (5)
    + 111100   (-3: all bits of 000011 flipped)
    --------
     1000001   -> discard the carry-out and you get 000001, i.e. 1 instead of 2
So the "real" implemented instruction was subtraction, while addition wasn't directly implemented. The "fixup", meanwhile, is just flipping all the bits, and this can be achieved, I suppose, more quickly than adding-with-carry over 36 bits.
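The usual hardware fix for the off-by-one above is the end-around carry: the carry out of the top bit is wrapped around and added back into the low bit. A toy 6-bit sketch (not UNIVAC's 36 bits):

```python
# 6-bit one's-complement addition with end-around carry.
WIDTH = 6
MASK = (1 << WIDTH) - 1

def oc_add(a, b):
    s = a + b
    if s > MASK:                 # carry out of the top bit...
        s = (s & MASK) + 1       # ...wraps around into the low bit
    return s & MASK

FIVE = 0b000101
MINUS_THREE = 0b111100           # all bits of 0b000011 flipped
assert oc_add(FIVE, MINUS_THREE) == 0b000010   # == 2, thanks to the end-around carry
```

Whether a given machine built addition on subtraction or vice versa, the end-around carry is what makes one's-complement sums come out right.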
This sounds awesome. I had no idea UNIVAC is still alive.