Lots of people have a strong opinion on this: some say that 0-based indexing is the only logical way of doing things, while others don't see what the fuss is about and prefer 1 because that's how we count objects. I suspect the people in the first category have done some low-level programming that needed arithmetic on indices. For example, say you want to take the string "abc" and repeat it until its length is 10, getting "abcabcabca". Assuming some Python-like language, you would start with:
a = "abc"
b = [" "]*10
With 0-based indexing you would do:
for i in range(0, 10):
    b[i] = a[i % 3]
In a 1-based language that becomes:
for i in range(1, 11):
    b[i] = a[(i - 1) % 3 + 1]
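Both loops can be checked in plain Python (a sketch; the 1-based version is emulated by shifting each index down by one to run on Python's 0-based strings):

```python
a = "abc"
n, m = 10, len(a)

# 0-based: target index i in [0, n), source index i % m
zero_based = "".join(a[i % m] for i in range(n))

# 1-based: target index i in [1, n], source index (i - 1) % m + 1;
# subtract 1 from each index to run it on a 0-based string
one_based = "".join(a[((i - 1) % m + 1) - 1] for i in range(1, n + 1))

print(zero_based)  # abcabcabca
assert zero_based == one_based == "abcabcabca"
```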
So you need to shift the index twice. This is because modular arithmetic needs 0 to form a ring. As a result, in situations where the difference between 0- and 1-based indexing matters, it's usually 0-based indexing that leads to simpler code.
Julia actually just includes a dedicated function for this case (mod1), and that covers the vast majority of places where 0 base is easier.
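For reference, Julia's mod1(x, m) returns a value in 1..m instead of 0..m-1. A minimal Python sketch of its behavior (the helper name here just mirrors Julia's):

```python
def mod1(x, m):
    # Julia-style mod1: result lands in 1..m instead of 0..m-1
    return (x - 1) % m + 1

print([mod1(i, 3) for i in range(1, 8)])  # [1, 2, 3, 1, 2, 3, 1]
# With mod1, the 1-based loop body is simply b[i] = a[mod1(i, 3)],
# with no manual index shifting.
```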
In my numerical code, 1-based indexing causes fewer ±1 adjustments than 0-based indexing. (Nitpick: in modular arithmetic 0 = N, so having the index run from 1 to N forms a ring just as well. The problem is that the conventional representative of the equivalence class …, -N, 0, N, 2N, 3N, … is 0, which arises from arithmetic's definition of modulo.)
In pretty much any language other than Python you'd have to be careful about the case where i might become negative anyway.
And in python you could just go:
import itertools as it
a = ''.join(it.islice(it.cycle("abc"), 10))
Also you don't need 0 to form a ring. Modulo arithmetic forms a ring no matter which representatives you pick. Languages just tend to implement one that includes 0 because it's more convenient.
This used to be implementation-defined behavior. C99 then codified the wrong behavior.
Assume positive b for a moment.
We want b * (a/b) + (a%b) = a. If a%b is to always be within [0..b), then a/b has to round toward -infinity. C99 instead chose rounding toward 0.
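The two rounding conventions can be compared in Python, which itself uses floor division; divmod_trunc below is a hand-rolled emulation of C99's truncating behavior:

```python
def divmod_trunc(a, b):
    # C99 semantics: quotient rounds toward zero,
    # remainder takes the sign of the dividend
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    return q, a - b * q

a, b = -7, 3
print(divmod(a, b))        # (-3, 2)   floor: remainder in [0, b)
print(divmod_trunc(a, b))  # (-2, -1)  truncation: remainder can be negative

# Both conventions satisfy the identity b * q + r == a
for q, r in (divmod(a, b), divmod_trunc(a, b)):
    assert b * q + r == a
```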
That's right. Why do you suppose they chose that behavior to standardize, rather than Python's? Conceivably it's because nobody on the C99 standards committee had enough technical expertise to make the argument you're making, but can you think of another explanation? Because the prior probability on that one is pretty low.
> In C89, division of integers involving negative operands could round upward or downward in an implementation-defined manner; the intent was to avoid incurring overhead in run-time code to check for special cases and enforce specific behavior. In Fortran, however, the result will always truncate toward zero, and the overhead seems to be acceptable to the numeric programming community. Therefore, C99 now requires similar behavior, which should facilitate porting of code from Fortran to C.
Ha - you’re correct of course. I saw the a^2 term and thought that can’t be right. Note to self - before attempting to correct others, check the “correction”.
According to wahern's quote above, it was to "avoid incurring overhead in run-time code to check for special cases", like the ability to accelerate mod-by-power-of-two by converting it to an AND.
In two's complement? You'd need to add extra masks and shifts to replicate the sign bit. It puts you in much more complex territory than throwing an AND at it.
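To make that concrete, a small Python sketch, using Python's floor-style % and a hand-rolled rem_trunc emulating C's truncating remainder:

```python
def rem_trunc(a, b):
    # C99-style remainder: sign follows the dividend
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    return a - b * q

for n in (5, -3, -8, 13):
    floor_mod = n % 8            # Python: always in [0, 8)
    and_mask = n & 7             # two's-complement bitwise AND
    c_mod = rem_trunc(n, 8)      # C99: sign follows n
    print(n, floor_mod, and_mask, c_mod)

# With floor semantics, n % 8 == n & 7 for every n, so the compiler
# can always emit the AND. With C's truncating semantics the two
# disagree for negative n (rem_trunc(-3, 8) == -3 but -3 & 7 == 5),
# which is why negative operands need the extra fix-up code.
```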