
A Summary of the 80486 Opcodes and Instructions (1999) - Tomte
http://www.dabo.de/ccc99/www.camp.ccc.de/radio/help.txt
======
userbinator
The Intel (and AMD) x86 manuals strongly hint at the octalness of the
encoding, but don't explicitly mention it; I seem to remember some of the
documentation for the 8080/8085 did summarise the instructions in octal, since
they were also encoded similarly:

[http://www.righto.com/2013/02/8085-instruction-set-octal-tab...](http://www.righto.com/2013/02/8085-instruction-set-octal-table.html)

[http://www.z80.info/decoding.htm](http://www.z80.info/decoding.htm)

But although frequently described as irregular, the encoding of the x86 and
its predecessors is still far more regular than something like a VAX:

[https://www.cs.auckland.ac.nz/references/macvax/op-codes/VAX...](https://www.cs.auckland.ac.nz/references/macvax/op-codes/VAX-Opcodes-num.html)

~~~
aap_
No, VAX instructions are much easier to understand. You have one byte of
opcode which specifies the operation in its entirety, followed by an
addressing mode byte and (depending on addressing mode) an immediate value for
each operand. It's not quite as neat as the PDP-11 instruction set but still
pretty nice. x86 otoh is just insane imo.

------
akita-ken
This kind of summary is useful for students (like me) who're taking computer
systems courses and have been taught in MIPS but want to get a primer into
x86.

------
faragon
Interesting: it explains the encoding of x86 opcodes (opcodes encoded in
octal, addressing modes, etc.).

------
poseid
How do octals compare to bytes on today's machines?

~~~
jasonm23
You are probably confusing octals with octets.

Octal (in the current context) is simply a way of referring to the fact that
the opcodes are expressed in octal (base 8), as opposed to hex (base 16) or
binary (base 2).

Octet, on the other hand, is simply an alternative word for byte (it is no
longer widely used and is somewhat archaic). Octets are in no way different
from bytes and are just another way to refer to 8 bits.

Octal used to be a lot more popular before the 80s, and many languages still
interpret numbers with a leading zero as octal values.

~~~
notalaser
"Octet" is still commonly used and in no way archaic. In fact, most standards
use "octet" instead of "byte" because, despite IEC having said otherwise in
IEC 80000-13, most other references still define the byte as the smallest
addressable unit that can represent any member of the host system's character
set.

Notably, C99 (and its predecessors) follow this convention. Owing to history,
and the curiosities and resolution constraints of fixed-point arithmetic, a
lot of devices with a DSP in them think a byte is anything but 8 bits.

There used to be systems that used 5, 6 or 7 bits for a byte, too, but as far
as I know, most of those really have gone the way of the Dodo. _However_,
since most of those systems were used in fields like telecom (and they weren't
only what we'd call "computers" nowadays), "octet", rather than "byte", is
still commonly used in virtually every networking-related context.

~~~
jasonm23
True, but I think calling this usage archaic is still OK(ish) unless we are in
pedantic mode, and of course sometimes we must be.

As you say yourself, non-8-bit bytes are no longer at large.

I certainly agree that in networking, and when dealing with just legacy ASCII,
"byte" is an imprecise term.

~~~
notalaser
I think I was a bit too careful not to hyperbolize and ended up doing the
opposite. Non-8-bit bytes are still very common in terms of numbers. For
instance, SHARC DSPs, (unfortunately...) one of the most popular DSP
families, operate with 32-bit bytes, if my (repressed) memory serves
correctly. I wouldn't be surprised if there were a lot, lot more deployments
of such systems than iPhones. The same goes for telecom and friends. There
are a lot of such systems; I'm mentioning SHARC because it's the one I
programmed most recently and can probably still do a decent job answering
questions about it.

Given the different programming-to-manufacturing ratio of these systems, I'm
not surprised that they see less exposure in the programming community, but
it's not a matter of pedantry; it's really a matter of correct use, and of not
being creeped out when the compiler insists that sizeof(char) and sizeof(int)
are both 1 on a 32-bit machine.

