
I guarantee that I would confuse the types "byte" (uint8_t) and "octet" (int8_t). The typical distinction between a byte and an octet has to do with the number of bits in the representation (a byte usually has 8, an octet always has 8). I don't know of any convention for bytes being unsigned and octets being signed.
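For instance, assuming the typedefs look like the ones described (the send() function here is made up purely for illustration), the two convert implicitly, so mixing them compiles silently:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint8_t byte;   /* unsigned, per the convention in question */
    typedef int8_t  octet;  /* signed, per the convention in question */

    static void send(octet o) { printf("%d\n", o); }  /* hypothetical API */

    int main(void) {
        byte b = 0xFF;
        send(b);  /* implicit integer conversion: compiles silently by default
                     and prints -1 on typical two's-complement implementations */
        return 0;
    }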



You're right that "byte" has no official size specification (although the de facto size is 8 bits), unlike "octet", which was defined as exactly 8 bits specifically for interoperability between different systems.

Regarding the question of signed/unsigned - I'll try to explain:

byte - unsigned

On page 37 of the C99 standard: "A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to 2^CHAR_BIT - 1."

i.e. according to the C99 standard, a byte is unsigned.
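To see that guarantee concretely, here's a minimal C sketch (nothing beyond <limits.h> and <stdio.h>) that prints the values the standard ties together:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* per C99, unsigned char ranges from 0 to 2^CHAR_BIT - 1,
           which is exactly UCHAR_MAX */
        printf("CHAR_BIT  = %d\n", CHAR_BIT);             /* 8 on virtually all hosts */
        printf("UCHAR_MAX = %u\n", (unsigned)UCHAR_MAX);  /* 255 when CHAR_BIT is 8 */
        return 0;
    }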

octet - signed

Think of an octet in two ways: on the one hand, the concept of something that is exactly 8 bits; on the other, the technical representation of that concept.

When you read the literature you'll notice that "octet" refers simply to the size of something (8 bits) and not its signedness. For example, octets arguably arose in the networking world, and NDR (Network Data Representation) refers to octets in a sign-neutral way.

On page 256 of the C99 standard: "The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's-complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits."

Now, how would you go about representing the concept of an "octet" (which is sign-neutral)? If you use an unsigned 8-bit integer, you can't represent the sign of the (conceptual) octet, while a signed 8-bit type can.
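To make the difference tangible, here's a small sketch reading the same bit pattern through both <stdint.h> types:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t u = 0xFF;          /* bit pattern 1111 1111 */
        int8_t  s = (int8_t)0xFF;  /* same bits; two's complement reads it as -1
                                      (the conversion is implementation-defined,
                                      but yields -1 on mainstream compilers) */
        printf("uint8_t: %u\n", (unsigned)u);  /* 255 */
        printf("int8_t : %d\n", (int)s);       /* -1 */
        return 0;
    }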



