
The origin of zero-based array indexing - fizwhiz
http://exple.tive.org/blarg/2013/10/22/citation-needed/
======
mtdewcmu
I don't get why zero-indexing requires so much justification. Here are two
powerful reasons why it makes sense:

* Zero is the first unsigned integer. If you start at 1, you're wasting one.

* p[0] == *p, and it would make no difference whether or not pointer arithmetic had ever been invented, because pointers are just memory addresses, and that's what the CPU works with.
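
A minimal C sketch of the second point, purely illustrative:

      #include <stdio.h>

      int main(void) {
          int a[3] = {10, 20, 30};
          int *p = a;
          /* p[i] is defined as *(p + i): the index is a pure offset,
             so p[0] and *p are the same thing */
          printf("%d %d\n", p[0], *p);        /* prints: 10 10 */
          printf("%d %d\n", p[2], *(p + 2));  /* prints: 30 30 */
          return 0;
      }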
    

The best reason for one-indexing:

* Some people think it's less confusing

> Over and over again tools meant to make it easier for humans to approach
> big problems are discarded in favor of tools that are easier to teach to
> computers, and that decision is described as an inevitability.

That's exactly how it should be. What's easier for humans is subjective.
What's easier for computers is objective, and going with the grain of the
machine tends to yield solutions that work, whereas going against the grain
tends to cause over-complicated and buggy implementations. That's the true
take-home of "Worse is Better." The Unix approach to syscalls worked, and Unix
is alive and well. The MIT approach was quite possibly impossible to
implement, and the system they were building is extinct. If you want to use a
computer well, you have to learn a little about computers and set aside your
preconceived ideas. What's so unreasonable about that?

~~~
jebus989
"Some people" is understated; if you're holding one orange and I ask for your
first orange then you give me that one, in your first hand. I'm over-
emphasising but the point is that no zeroes are involved.

It's a trade-off, if we were bending entirely to the computer's will we'd all
be programming machine code, whereas there's overhead in using more natural
language. Honestly I've bought this 0-indexing is the Right Way for a long
time but the more I use R the more a 1-index seems to make sense for high-
level languages.

~~~
lucideer
An index is the beginning of a unit - the point at which it is found, where it
begins. The index doesn't represent the unit itself, just where it is located.

Your orange takes the zeroth index, becoming the first orange:
[http://i.imm.io/1lWRO.png](http://i.imm.io/1lWRO.png)

People understand the concept of zero perfectly. They know the day doesn't
begin at 1am, they know babies aren't born 1 year old. This isn't a computer-
specific concept in any way.

~~~
adam-a
> They know the day doesn't begin at 1am, they know babies aren't born 1 year
> old. This isn't a computer-specific concept in any way.

This is true, and an interesting point, but also nobody describes a baby as
being zero years old, or the time as being zero o'clock. People avoid zero in
these situations and find alternatives, e.g. 3 months old or 12.30am.

Personally I see both sides, but the point is that whenever you divide
something into units, the first unit is 1 and that's what people call it.

~~~
tmoertel
Actually, in the US military, it's common to refer to the first hour of the
day as zero. As in, "zero hundred thirty hours" for 30 minutes after midnight.

~~~
mmorris
That the clock starts at 12 has annoyed me for quite a long time (i.e., 12PM
comes before 1PM). I kind of wish that non-military usage also used
0:00-11:59, if only for the logical consistency of it.

Alternatively we could just switch 12PM and 12AM, but that would mess with
midnight == start of the day.

Unfortunately I think I'm doomed to suffer through this inconsistency, since
it doesn't seem to bother anyone else.

~~~
serans
Why not use a 24-hour clock then? In Europe, 24-hour (digital) clocks going
from 00:00 to 23:59 are commonly used. My French friends even use it when
speaking (I admit it's kind of weird to hear "see you at sixteen thirty"),
while in other countries such as Italy they use 24-hour clocks as well, but
they "translate" when speaking.

~~~
mmorris
Yeah, I think that would be better.

I guess for me personally the perfect solution would be a 12-hour clock that
operates on the same principles as the European 24-hour clock (which I think
is the same as the military clock?). So 00:00 to 11:59, twice per day.

But honestly my whole argument is pretty academic, so I'll probably just put
up with the clock starting at 12.

------
anonymouz
I don't get it: the quote from Dr. Richards essentially says that the reason
to do it this way in BCPL is that indices are offsets, and it would have been
unnatural to subtract 1 from them.

I can't see anything about the computational cost of the indexing in there:
Does the author have another source for that? It doesn't seem to be supported
by what he quotes. The anecdotes about the IBM 7094 are nice, but I don't see
much evidence for the kind of connections the author claims.

~~~
moogleii
I was inclined to disagree due to the author's strong arguments and precise
use of language, but after re-reading the part around Dr. Richards's comments,
I do think the author made a bit of a logical leap.

It went from "I'm Dr. Richards, and BCPL supports pointer arithmetic starting
from zero" to "I'm the author, and the IBM 7094 handles all the pointer
arithmetic during compile time, and time is limited; therefore, Dr. Richards
added zero-based indexing to speed up compile time." When really, his
intentions are assumed by the author. Dr. Richards sounds like a very pure
computer scientist. He may have added that feature without consideration of
when pointer arithmetic is calculated, simply because he wanted it in his BCPL
language.

Really, Dr. Richards seems to sum up the issue nicely: "I can see no sensible
reason why the first element of a BCPL array should have subscript one."

Also, wasn't the original question why zero vs. one, not why we started using
pointer arithmetic at all?

------
ingenter
There are other interesting reasons why it is better to use zero-based
indexing. In some algorithms you need to access the n-th element modulo
something. For example, to fill a 2-D array from a flat array, with zero-based
arrays you write

    
    
      for (i = 0; i < width*height; i++) {
          arr2d[i / width][i % width] = array[i];
      }
    

If you use one-based arrays, you have to write

    
    
      for (i = 1; i <= width*height; i++) {
          arr2d[(i-1) / width + 1][(i - 1) % width + 1] = array[i];
      }
    

Yes, you can write two loops, but this example shows how mapping one array
onto another is easier if you use zero-based arrays.

~~~
cousin_it
Good point. You could also say that simulating two-dimensional arrays with
one-dimensional arrays is easier with zero-based indexing: a[x + y * w] rather
than a[x + (y - 1) * w]. Are there similar use cases that look more natural
with one-based indexing?
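
A minimal sketch of that simulation, assuming row-major layout (names are
illustrative):

      /* A 2-D array simulated in a flat buffer: with zero-based indices the
         mapping is a plain multiply-add, with no +1/-1 corrections. */
      float get2d(const float *a, int w, int x, int y) {
          return a[x + y * w];
      }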

------
nshepperd
Huh?

Mike Hoye: "C inherited its array semantics from B, which inherited them in
turn from BCPL, and though BCPL arrays are zero-origin, the language doesn’t
support pointer arithmetic, much less data structures."

Martin Richards: "If a BCPL variable represents a pointer, it points to one or
more consecutive words of memory. These words are the same size as BCPL
variables. Just as machine code allows address arithmetic so does BCPL, so if
p is a pointer p+1 is a pointer to the next word after the one p points to.
Naturally p+0 has the same value as p."

------
javert
> It’s hard for me to believe that the IEEE’s membership isn’t going off a
> demographic cliff these days as their membership ages

It isn't, because anybody who wants to publish in an IEEE conference has to
pay a fee X as a member or Y as a non-member, where X + IEEE membership fee <
Y.

So young graduate students just join because our advisers tell us we must, to
save money.

Of course, we get mail advertisements from the people (like insurers) that
IEEE sells our membership information to.

I think everybody in my demographic (young grad students) views this
organization as basically operating a racket. I bet they actually do lots of
really great stuff. But they've never told me anything about that. It's not
like there was an information session when I "joined" (paid up). So it's sad.
IEEE has a real image problem.

------
adamnemecek
That sure is a lot of words to say 'because indices are memory offsets'.

~~~
thepicard
Because it's more than that. It wasn't originally to speed up run time, but
compile time. That history is wildly fascinating to me. We tend to take
compile time for granted, unless we are bitching about Scala, or compiling C++
at Google.

The author also addresses the importance of history. Voodoo knowledge _does_
pervade modern computer science, and I for one am happy to see something
different.

~~~
stinos
_Because it's more than that_

But is it really? I'm inclined towards anonymouz's and adamnemecek's idea
here: Dr. Richards clearly says indices are offsets, no? It's nice that the
author gets all sentimental about it, but that doesn't make him right.

------
mattdw
Related is Dijkstra explaining the pros and cons of different styles of
indexing and slicing:
[http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EW...](http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html)

~~~
jimhefferon
Right. The author discusses Dijkstra's argument, and dismisses it as not
rising to the level of wrong. Personally, I find Dijkstra's argument
unassailable. (It is, roughly, what a person would express in Python syntax as
range(0,k) + range(k,n) == range(0,n).)
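
The same property in C terms, as a sketch (visit, k, and n are placeholders):
loops over the half-open ranges [0,k) and [k,n) abut exactly, covering [0,n)
with nothing repeated or skipped.

      /* [0,k) followed by [k,n) visits each index in [0,n) exactly once:
         the end of the first range is literally the start of the second. */
      for (int i = 0; i < k; i++) visit(i);
      for (int i = k; i < n; i++) visit(i);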

~~~
ronaldx
I don't find Dijkstra's argument unassailable: let's claim instead that half-
open intervals are ugly (they are asymmetrical, and not typically used in
mathematics) and closed intervals are preferable.

Given that, his arguments support 1-indexing better.

~~~
mauricioc
The claim doesn't hold. Half-open intervals appear in important places. That
happens mainly because the family of finite unions of half-open intervals is
closed under complement (that is, they form a semi-ring; this is unrelated to
the "ring" concept in abstract algebra). In measure theory, one form of the
Carathéodory extension theorem [0] says that you can uniquely extend a measure
on a semi-ring to an actual measure (defined on a sigma-algebra).

The equivalent statement in probability theory says that a probability is
uniquely determined by its cumulative distribution function, which is
sometimes nicer to take limits of. You can also get a probability from a
cumulative distribution function, provided your function is right-continuous
[1] (which is directly related to half-open intervals). For more examples of
half-open intervals in probability, you can look at stochastic processes. See,
for example, càdlàgs [2] and Skorohod spaces; they capture the notion that
processes that "decide to jump" (think of Markov chains changing states if you
like) are right-continuous.

IMO, half-open intervals are just nicer whenever you have to union them or
intersect them, and are no worse in other aspects when compared to closed
intervals. Also, I think the author makes a big cultural blunder when he
dismisses "mathematical aesthetics" as a valid reason. A significant number of
mathematicians think of elegance as the ultimate goal in mathematics; as Hardy
famously said, "There is no permanent place in the world for ugly
mathematics".

[0]
[http://en.wikipedia.org/wiki/Carath%C3%A9odory%27s_extension...](http://en.wikipedia.org/wiki/Carath%C3%A9odory%27s_extension_theorem)

[1] Well, we also need the obvious conditions: Correct limits at infinity and
monotonicity.

[2]
[http://en.wikipedia.org/wiki/C%C3%A0dl%C3%A0g](http://en.wikipedia.org/wiki/C%C3%A0dl%C3%A0g)

~~~
ronaldx
> provided your function is right-continuous

Yes, but we are talking about indexing discrete arrays. All of your examples
are continuous, and are not analogous.

~~~
mauricioc
I was replying to your "not typically used in mathematics" quote.

As to the "and [thus] are not analogous" part, I recommend the Concrete
Mathematics book. You'll see at least a hundred continuous examples alongside
their discrete counterparts, as you can imagine from the book's title.
Half-open intervals feature prominently when doing discrete calculus in the
Sums chapter.

------
kabdib
Say I want to index an array modulo N in a one-based language. I have to write

      array[M mod N + 1]

to get the right index. This is at best clumsy.

The reality is that, whether people realize it or not, we actually start
counting most things from zero. My bank account doesn't start at 1, nor does
any stock price, nor the number of cattle or NSA agents hiding in bushes. Zero
is
a perfectly reasonable speed to go (especially at a stop sign). None of these
are "at least one".

~~~
TheLoneWolfling
If you define your modulo operator as returning 1..N (inclusive), that's not
an issue.

~~~
kabdib
... whereupon you redo all your math. Wheee!

------
adam-a
Great article; the last part about understanding programming languages and
computers as user interfaces and human constructs is excellent. I love
understanding why things are as they are. Too many people seem to see
programming languages as the language the computer speaks, without realising
that they're actually at least two steps removed, with the precise intent of
making it easier for people to use a computer.

My favourite offshoot is this tragic LWN post about a fairly amazing sounding
debugging tracer that "three people in the entire world have ever run":
[http://lwn.net/1999/0121/a/mec.html](http://lwn.net/1999/0121/a/mec.html)

------
dahart

      float b[4], *bb;
      bb = b - 1;
    

Voila! I have a single array that is both zero-indexed and one-indexed, in C.

This destroys the argument about pointer arithmetic. Both arrays use exactly
the same calculation to access.
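
A sketch of the trick in use (one caveat: forming b - 1 is technically
undefined behaviour in standard C, though it works on typical flat-memory
platforms):

      #include <assert.h>

      int main(void) {
          float b[4] = {1.0f, 2.0f, 3.0f, 4.0f};
          float *bb = b - 1;  /* one-based view of the same storage */
          /* bb[1]..bb[4] alias b[0]..b[3]; both sides use the identical
             address computation */
          assert(bb[1] == b[0] && bb[4] == b[3]);
          return 0;
      }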

Think this looks like a bad idea? That's _opinion_ and _convention_, and
nothing more. This code snippet comes from "Numerical Recipes in C, 2nd Ed.",
in which they use one-based arrays in C frequently. In that edition, they also
advocated switching on the fly :O. Later editions switched to zero-indexed
arrays due to dominant convention.

"It is sometimes convenient to use zero-offset vectors, and sometimes
convenient to use unit-offset vectors in algorithms. The choice should be
whichever is most natural to the _problem_ at hand." [NR, 2nd ed. pg 18.
emphasis mine]

Context is the key to knowing which one is better, and neither one is best in
all contexts. Saying zero-indexed arrays are always better is like saying
scrambled eggs are always better: it doesn't make sense without context, and
it's imposing an opinion that others may not share.

~~~
cliffbean
Read their explanation though:

"For example, the coefficients of a polynomial a0 + a1 x + a2 x2 + . . . + an
xn clearly cry out for the zero-offset a[0..n], while a vector of N data
points x i , i = 1 . . . N calls for a unit-offset x[1..N]"

The polynomial cries out for zero for a real mathematical reason. Zero really
is more natural there. The vector of N data points "calls" for a unit-offset
because, um, convention?
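
That mathematical reason in sketch form: with zero-based storage the index is
the exponent, so evaluation needs no index shifting (illustrative code):

      /* Evaluate a[0] + a[1]*x + ... + a[n]*x^n by Horner's rule.
         Zero-based: a[i] is exactly the coefficient of x^i. */
      double poly_eval(const double *a, int n, double x) {
          double r = a[n];
          for (int i = n - 1; i >= 0; i--)
              r = r * x + a[i];
          return r;
      }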

~~~
dahart
Yes, right! Again, "context is the key to knowing which one is better."

We are violently agreeing; we don't really need to twist words or attempt to
disprove what might just be a poorly worded statement, right? We are all
educated engineers here, and we can afford each other the benefit of the
doubt, no? We both get it.

The important part of the point is that all the other arguments here about
which choice is _inherently_ more "natural" are somewhat bogus. Neither is
inherently more natural; what is more natural depends on what you're trying to
do.

In some cases there isn't an obvious choice, and that's where convention plays
the biggest role, but having a default choice made for you doesn't make the
convention better or more natural; it just makes it an arbitrary choice that
everyone is used to.

~~~
cliffbean
I agree that we're having an educated and civil discussion.

And I agree with the sentiment of using the right tool for the job, in
principle.

However, I disagree that we're in full agreement :). I actually do think that
there is a choice here that is inherently more natural for a sufficient
majority of general-purpose tasks. The main arguments I'm aware of against it
are arguments purely from human convention, which I find are not always as
"natural" as one might hope.

~~~
dahart
So, is the implication that you feel zero-based is inherently more natural? Or
one-based? Truly just curious, and I'm not about to argue over it.
:)

Personally, even though I'm not used to it, I could probably be convinced that
one-based might be the right "general purpose" answer for looping through an
array when not doing any math on the index. Often, when doing math on the
index, zero-based is much better. I do a _lot_ of index-based math, and yet if
I'm being honest, I'd say the vast majority of my loops and the loops in code
around me probably don't play index tricks and could use either indexing
scheme.

Funny enough, I do a lot of graphics and image processing, and the OP
referenced a joke about splitting the difference and using 0.5 -- I use
0.5-indexed arrays all the time! Logically, not syntactically -- I mean I take
pixel samples starting from the center of a pixel, which is indexed by
(ix+0.5, iy+0.5). If there were such a thing as a 0.5 index, I would use it. :P
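
In sketch form (illustrative):

      /* Pixel i covers the interval [i, i+1); its sample point sits at the
         center -- the "0.5 index". */
      float pixel_center(int i) {
          return (float)i + 0.5f;
      }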

~~~
cliffbean
Oh, I thought I was clearer than I was. I feel zero-based is more natural.

However, I do agree with you that in the vast majority of loops there aren't
any interesting index tricks. For this reason, I actually think language
constructs like "foreach" and iterators and so on are better than counting
from either 0 _or_ 1 in most cases. In high-level languages, use high-level
looping constructs, and let the compiler figure out the indices for you :-).

------
MichaelMoser123
Lambda the Ultimate had an interesting discussion on this article recently:
[http://lambda-the-ultimate.org/node/4843#comment](http://lambda-the-ultimate.org/node/4843#comment)

------
downer90
To sum it up, after 20 paragraphs:

    
    
      So: the technical reason we started counting arrays at 
      zero is that in the mid-1960′s, you could shave a few 
      cycles off of a program’s compilation time on an IBM 7094. 
      The social reason is that we had to save every cycle we 
      could, because if the job didn’t finish fast it might not
      finish at all and you never know when you’re getting 
      bumped off the hardware because the President of IBM just 
      called and fuck your thesis, it’s yacht-racing time.

~~~
roel_v
Except that's not even what the guy who emailed it said. He said nothing about
compile time - he said basically 'computers work with offsets, and the first
element is at offset 0'. But that would have gone against the author's
preconceived notion that there is something special about starting at 0, so he
needed to make something up to make his 'article' fit.

(not harping on you, just stating for the TL/DR's looking for a summary)

------
lukasm
If people would use the word "offset" there would be a lot less confusion.

------
RyanMcGreal
TL;DR:

"So: the technical reason we started counting arrays at zero is that in the
mid-1960′s, you could shave a few cycles off of a program’s compilation time
on an IBM 7094. The social reason is that we had to save every cycle we could,
because if the job didn’t finish fast it might not finish at all and you never
know when you’re getting bumped off the hardware because the President of IBM
just called and fuck your thesis, it’s yacht-racing time."

~~~
k__
Really?

I thought the reason is that the index is simply a memory offset. :\

a[0] always meant something like "get me the data where a is pointing to"

a[1] meant "get me the data at offset 1 from where a is pointing to"

~~~
VLM
Traditionally this is taught to noobs on a blackboard or I suppose powerpoint
now.

One based indexes are just going to confuse the noobs even worse. "So a[2]
means get me the data offset by one from a" "uh what, don't you mean offset by
two?" "No somebody said pointer arithmetic would be simpler if you had to
memorize a bunch of different times to add or subtract one, just as a barrier
to entry". "Oh well that sucks. Someone should invent a zero indexed array
language..."

~~~
k__
Hum, funny...

I always thought about arrays as pointers (at least in C), like they're just
syntactic sugar for a specific kind of pointer.

------
moron4hire
It also makes modulo math easier. Starting at the first element in an array
and jumping to every 8th element thereafter gives you indices of 1, 9, 17, 25,
etc. for 1-indexed arrays. It's not immediately or intuitively clear that the
index is 8n+1. However, 0-indexed arrays would give 0, 8, 16, 24, which is
very clearly 8n.
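
In loop form, as a sketch (array, len, and visit are placeholders):

      /* Every 8th element, zero-based: the indices are literally 8*n. */
      for (int n = 0; 8 * n < len; n++)
          visit(array[8 * n]);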

There is no argument for 1-indexed arrays other than, "so-and-so can't be
arsed to learn 0-indexed."

~~~
pubby
This makes indexing 2-D arrays as 1-D arrays a lot easier. With 0-based, index
= x + y * width, while with 1-based, index = ((x-1) + (y-1) * width) + 1.

------
presidentender
I 'got' zero-based array indexing when I tried making a Tetris clone. If you
try to sum Cartesian coordinates stored in an array whose indexing is 1-based,
you have to do subtraction to correct the off-by-one error.

I can't describe the theory in any more detail than that; I don't even know
what I'd have to study to describe the issue using technical vocabulary. But
it just clicked for me.

------
praptak
I think a lot of people missed the point of the article. Yeah, it's basically
memory offsets/pointer arithmetic. But the difference here is between "because
indices are memory offsets and it just makes sense" and "because indices are
memory offsets and I have the research to back it up".

The point of the article is the impressive research.

~~~
roel_v
Emailing two retired people and then slapping together 20 paragraphs full of
hubbub about nothing is not 'impressive research'. It's not even 'research'.

~~~
davidgerard
It's more than anyone else had bothered doing.

------
coolsunglasses
This mischaracterizes what happens by leaning on an unreliable source.

------
jheriko
Not sure exactly why the idea that it saves compile time is supposed to be
surprising.

It's obvious that you can turn 'human friendly' (I don't agree with that at
all) one-based indexing into zero-based indices at the compiler stage by doing
a lot of -1's (zero-based makes sense for memory addressing; that's how memory
addresses work). Those -1's obviously cost something...

Letting that slow down run-time code would have been utterly unthinkable in
the context of old-school computing. It might make sense today if you are
using some high-level tool built on piles and piles of other tools, so far
from the metal that you have no idea what is going on... but even then I
struggle to understand why this would be considered.

Offloading run-time costs to compile time is standard optimisation practice,
although it was massively more important in the past than it is now. The case
from the story shows how language design optimised the compiler design, so
that it would optimise the final run-time.

It's just optimisation: any programmer assigned this task in this context is
highly likely to come up with the same solution, because it's straightforward
and works.

------
linker3000
Isn't this the 1th time this has been posted recently?

------
dvanduzer
I read this and thought "haven't you even _read_ Guido's post on this?" and
then retrieved the hyperlink, to my chagrin: it turns out this is the original
post Guido was responding to.

[https://plus.google.com/115212051037621986145/posts/YTUxbXYZ...](https://plus.google.com/115212051037621986145/posts/YTUxbXYZyfi)

------
kannanvijayan
The origins of our current commonly accepted indexing semantics are one thing.
The question of which scheme is better is another thing entirely, and it's
been settled quite definitively.

There are myriad reasons why zero-based indexing is the correct approach.
Dijkstra's commentary on the subject gets to the heart of the issue, but let's
take the roundabout route.

Consider languages with pointers and pointer arithmetic. In these languages,
anything outside of zero-based indexing is confusing and error-prone. The
equivalence:

    
    
      p[i] is *(p + i)
    

only holds with zero-based indexing, and it is a crucial property to maintain.
The crucial understanding of |i| in this context is that it's not simply a
number: it is a _vector_ from one location to another. The nullary vector (0)
takes you from |p| back to |p|, i.e. |p == &p[0]|. To get the vector between
two pointers, we simply subtract the origin from the destination:

    
    
      v(p, q) = q - p
    

So:

    
    
      v(p, &p[i]) = &p[i] - p = (p + i) - p == i
    

These vectors fall into a natural algebraic relationship. Consider two indexes
|i| and |j|, and the vectors between |p|, |&p[i]| and |&p[j]|. To help with
readability, I'll alias |pi = &p[i]| and |pj = &p[j]|.

The vector from |p| to |pi| is |i|. The vector from |p| to |pj| is |j|. The
vector from |pi| to |pj| is:

    
    
      v(pi, pj) = pj - pi = (p + j) - (p + i) = j - i
    

Plug this vector back in as an index from |pi|, and we get back to |pj|
naturally:

    
    
      pi[v(pi, pj)] = (pi + v(pi, pj)) = (p + i) + (j - i) = p + j = pj
    
    

With one-based indexing, this algebraic consistency breaks down. I'll use caps
to distinguish the one-based indexing formulas from the zero-based indexing
formulas. In the one-based scheme, |&P[I] == (P + (I - 1))|. Following from
that:

    
    
      V(PI, PJ) = PJ - PI = (P + (J - 1)) - (P + (I - 1)) = (J - 1) - (I - 1) = J - I
    

So far so good. The vector calculation yields the same number! Great! Now
let's plug it back in:

    
    
      PI[V(PI, PJ)] = (PI + (V(PI, PJ) - 1)) = (P + (I - 1)) + ((J - I) - 1) = P + J - 2 = PJ - 1
    

Nope! Indexing from |PI| with the vector from |PI| to |PJ| no longer takes you
to |PJ|. Rather, it takes you to one element _before_ |PJ|.

The consistent, clear relationship between pointers and the vectors between
them no longer holds. Vectors no longer compose: the |-1| term compounds over
each composition.
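
The zero-based identity is easy to check in C, where pointer subtraction
computes exactly this vector (a small sketch):

      #include <stddef.h>
      #include <stdio.h>

      int main(void) {
          int a[10];
          int *p = a, *pi = &p[3], *pj = &p[7];
          ptrdiff_t v = pj - pi;         /* v(pi, pj) = 7 - 3 = 4 */
          printf("%d\n", &pi[v] == pj);  /* composition lands on pj: prints 1 */
          return 0;
      }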

This is no small thing to lose. Algebras are extremely powerful ways of
coherently describing the behaviour of systems. Losing these algebras means
you lose their powerful regularity. Adding two vectors no longer yields the
true composite vector. Multiplying a vector by a scalar no longer yields a
scaled vector. At every vector addition or scaling, the programmer will be
forced to adjust their result vectors to obtain the true composite vector.
This is an enormous burden on the programmer.

Any programming language that imposes this burden on the programmer is simply
wrong. We make languages so that it's easier for us, people, to reason about
the behaviour of our programs. Destroying the algebraic relationship between
pointers and the vectors between them is inexcusable. We may have experimented
with it during the early years of language development, but we know better now.

The above is just _one_ reason why zero-based indexing is, without a doubt,
the correct approach. The other major reason has to do with the superiority of
half-open intervals over closed intervals, and of the algebras over those
intervals. I might write another comment on that if I get time, but the gist
of it is the following (this is the thrust of Dijkstra's argument in favour of
zero-based indexing):

Zero-based indexing supports a notion of integer intervals that fall into a
coherent algebraic framework of half-open intervals.

Adding to Dijkstra's argument, this notion of an algebra over half-open
intervals is powerful because not only does it capture integer intervals
(array and pointer arithmetic in C and C++), but it can also be extended to
ranges over a variety of ordered containers (e.g. collections of arbitrary
values) and over ordered infinite sets (e.g. the abstract set of all strings,
or all JSON values, etc.). The most prominent implementation of this is C++'s
STL ranges (there's a very good reason |container.end()| in C++ returns an
iterator "past" the last item in the collection). Suffice it to say that for
algebras over intervals, half-open intervals win over closed intervals.
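
The plain-C analogue of that convention is the one-past-the-end pointer, which
the standard explicitly permits (a sketch):

      /* Half-open iteration: 'end' points one past the last element, so an
         empty range is simply begin == end and the length is end - begin. */
      void fill(int *begin, int *end, int value) {
          for (int *it = begin; it != end; ++it)
              *it = value;
      }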

However, I don't have time to write that novel right now ;)

------
Avshalom
Meanwhile, Pascal sidesteps this* entire aesthetic debate by letting you
define the index range of an array as you see fit, and did so years before
UNIX convinced everyone that C was somehow a good idea.

*'this' referring to the debate in the thread, not in the article.

~~~
lmm
Which is a terrible idea. It's like sidestepping the debate over whether to
use spaces or tabs by using both.

------
jkcxn
Imagine if we used different symbols for counting and indexing. I think Roman
numerals would be good to use for indexing.

    
    
      arr[I + 0] = arr[I]
      arr[II + 3] = arr[V]
      arr[I - 10] = undefined

~~~
mtdewcmu
That's funny. Someone should implement Roman numerals in a real language.

------
Hellenion
And why would it be so bad to have a programmer's mythology? Sure, it isn't
factual history, but it lets us explain the world we see around us in a
memorable and understandable way.

------
crististm
No mention of Dijkstra's argument for counting from zero (EWD831)?

------
phryk
You know how on a few occasions you stumble over a post that seems benign but
at the end you go: "Wow. Much insight. Very inspiration."?

This has been one of these occasions.

