> The word size is some multiple of the byte size. I’ve been confused about this for years, and the Wikipedia definition is incredibly vague
It is extremely confusing. A "word" means different things depending on architecture, programming language, and probably a few other things that make up the context.
For example, in the x86 assembly world you usually call a (usually aligned) 16 bit value a "word", a 32 bit value a "double word", and a 64 bit value a "quad word", all because the original 8086 had 16 bit wide registers that could alternatively be used as two 8 bit registers.
In other architectures, or more generally in other contexts, a word often relates to "how wide the bus is". But even then some of the old nomenclature bleeds into newer designs that may have gotten wider buses, and it's not even always clear which bus is being referred to.
Basically, unless you operate within a certain shared context, the word "word" is fuzzy and ambiguous. Maybe not quite as much as the word "object", but it's definitely up there somewhere.
Yes, that's correct. The word "word" is ambiguous and has a few meanings. Where I'm from, the "word" is the unit you can load/store to memory in a single instruction. So if you can load a 32-bit quantity, that's your word size. The byte is the address granularity, while the word is the access granularity.
Because of backwards compatibility, and because CPUs are now far more flexible about the sizes they can load from memory than they traditionally were, the meaning of "word" doesn't really matter anymore. In the context of Win32 and x86 programming, a WORD is 16 bits, a DWORD ("double word") is 32 bits, and a QWORD ("quad word") is 64 bits. That's most likely where you'll see it these days.
The reason word size is so confusing is that we started sharing architectures (partially or fully) across multiple CPU designs, baking in the word size of whichever CPU design came first. And even architectural word size has fallen out of modern use.
CPU word size for individual CPU designs is much easier to define. It's not really the bus size, though the two were strongly correlated until caches and bus multipliers became a thing (for example, the PowerPC 601 has 32-bit words and 32-bit registers despite having a 64-bit bus).
Word size is the native operation size: the size of integer you should use in your code for the best performance. That's why the 68000 has a 16-bit word size despite having 32-bit registers; instructions for 32-bit operations took a few cycles longer.
That's a Windows issue. When they say DWORD, they actually mean 32 bits. It's just a type alias.
I assume you will find the same issue in the Windows NT ports to MIPS and PowerPC.
Where you do see "dword" meaning 64 bits on those platforms is in their own documentation and assembly syntax. Most software in higher-level languages (outside Windows) avoided naming its type aliases WORD, DWORD, and QWORD.
TIL that word/dword/qword comes from x86 assembly. I always thought these were invented by Microsoft as part of win16 and then carried into win32 for source-level backwards compatibility.