Regarding the question of signed/unsigned - I'll try to explain:
byte - unsigned
On page 37 of the C99 standard: "A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to 2^CHAR_BIT - 1."
That is, according to the C99 standard, a byte is unsigned.
octet - signed
Think of an octet in two ways: on the one hand, the concept of something that is exactly 8 bits; on the other hand, the technical representation of that concept.
When you read the literature, you'll notice that "octet" refers simply to the size of something (8 bits) and not its signedness.
For example, octets arguably arose in the networking world, and NDR (Network Data Representation) refers to octets in a sign-neutral way.
On page 256 of the C99 standard: "The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits."
Now, how would you go about representing the concept of an "octet" (which is sign-neutral)? If you use an unsigned 8-bit integer, you can't represent the sign of the (conceptual) octet, while a signed 8-bit type can.