Edsger Dijkstra's note on starting array indices at 0 (pdf, 1982)
160 points by bdhe on June 21, 2011 | 76 comments



Seems like I'm in the minority here, but just yesterday I found an off-by-one bug in my small project for my dissertation that had been sitting there unnoticed for months, and I started thinking about how ridiculous it is that zero-based indexing is in such widespread use. Here I see someone arguing that it eases cycling through the array, "a frequent operation". How is looping through an array or accessing its last element a frequent operation?

In every non-computer-related document one will find ordinal numbers starting from 1, and n-element sets with the number of the last element being n. Now, thanks to zero-based indexing, one always has to switch back and forth between two ways of counting things, depending on whether one is writing/reading code or prose. Just pay attention over the next week or month to how many times you forget to make the switch and make the kind of off-by-one mistake I'm talking about. I admire mathematics more than any other discipline and I do see the benefits of a formal approach, but here the formalism has, in my opinion, clearly diverged from any kind of common sense and from actual software engineering practice.

In fact, in any engineering discipline, if you study its history, communication errors are the most frequent cause of failure or even disaster. Having to always translate every document into the private language of your discipline certainly doesn't help communication. Dijkstra mostly did proofs and didn't have to deal with other people much in his work, so perhaps he didn't understand that.


It isn't just computers. Mathematicians start from zero or one depending on the context. Hence the famous joke about the absent-minded mathematician whose wife left him on a street corner with a dozen packages and implored him to please, please, please pay attention and make sure they weren't stolen. When his wife returned, he apologized profusely. "I'm so sorry, my dear. My mind must have wandered. I didn't see anybody come near, but now one of the packages is missing."

"Are you sure?" said his wife. "It looks like they are still all here."

"There are only eleven left. I've counted them a dozen times at least. I'll count them again. See: zero, one, two...."

P.S. The most famous example I can think of is the aleph notation for the infinite cardinals, which starts with aleph null: http://en.wikipedia.org/wiki/Aleph


I doubt mathematicians commonly use zero as an index of the first element of a collection. Matrices and vectors which are closest to arrays are indexed from one. In the aleph notation aleph null is different from all the other alephs because it represents the cardinality of an infinite, but _countable_ set, so the notation makes sense and it doesn't have any disadvantages.


\aleph_0 is the first countable infinity because the alephs (like the beth numbers, and the levels of the cumulative hierarchy) are indexed by ordinals, which start from zero, i.e. the empty set in the usual constructions of von Neumann and Zermelo.

However, whether we start from 0 or 1 is ultimately arbitrary and more a question of symbol choice than semantics, since the sequence of natural numbers beginning with 1 is isomorphic to the sequence of natural numbers beginning with 0. Set-theoretically it makes a little more sense to start with 0, but when considering arithmetic alone, it really makes no difference.


There are many other examples. Topological spaces can be described as T0, T1, T2, etc. to indicate the separation axioms that hold on them. In part this reflects the tendency for there to be a first case that is degenerate or that common sense overlooks; for example, lines are one-dimensional and planes are two-dimensional, but mathematicians start with points, which are zero-dimensional.

It feels like this is related to the question of whether the "natural numbers" includes zero or not. In the set theory classes I took, we used N (in "blackboard bold") for the natural numbers 0, 1, 2, 3, ... and Z+ (in "blackboard bold") for the positive integers. My professor's pragmatic take on the controversy was that Z+ already means the positive integers, so it seemed like a waste to let N mean the same thing as Z+ and have no symbol to denote the nonnegative integers.


I always liked my algebra professor's way of getting to the same conclusion of \mathbb{N} including 0:

The natural numbers are how we count things. How many moons does Earth have? One. How many eggs are in the carton in my fridge? Twelve. How many elephants are in this room? Zero.

Incidentally, all of my professors agreed with this and most of them used 0-indexing unless some insight or ease of work was exposed by a different indexing scheme.


When I started grad school in math, I hung out with a bunch of set theorists & topologists. And they all used \mathbb{N} to mean {1, 2, 3, ...}. Their reasoning was that we already had a symbol for {0, 1, 2, 3, ...}, namely \omega, the first infinite ordinal. However, my experience in the years since suggests that your convention is the more common one.

BTW, there is an important point made in your comment, one I had to hammer on repeatedly when I taught discrete math: that the answer to a "How many ... ?" question is a nonnegative integer, i.e., an element of {0, 1, 2, 3, ...}. I've found that this fact is surprisingly difficult for many people to grasp.


Series, sequences, and polynomial coefficients are indexed starting with zero. All of them are used rather often and are important.


There are different conventions on this. Some textbooks have sequences starting from one, some from zero.

There's an advantage to starting from zero if you want to work with Taylor expansions.
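
For instance, a quick Python sketch with made-up coefficients: when the coefficient list is 0-indexed, each coefficient's index is exactly the exponent it multiplies.

  # Taylor-style polynomial a_0 + a_1*x + a_2*x^2, with made-up coefficients
  a = [1.0, 0.5, 0.25]

  def poly(x):
      # With 0-based indexing, coefficient a[k] pairs directly with x**k.
      return sum(a[k] * x**k for k in range(len(a)))

  assert poly(2.0) == 1.0 + 0.5 * 2.0 + 0.25 * 4.0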


This doesn't make sense. If you count out twelve packages: zero, one, two, three, four, five, six, seven, eight, nine, ten, eleven; you have used twelve number names, hence: twelve packages.


Yes, obviously he made a mistake :-) and the mistake he made was starting from zero but assuming that the last number he said out loud ("eleven!") was the number of objects he had counted, which is only correct if you start from one.


I think Luyt's point, with which I reluctantly agree even though it spoils a nice joke, is that a hypothetical mathematician in whom the habit of counting from 0 is so firmly ingrained would also have a firmly ingrained habit of taking (last number named + 1) as the number of things.

So yeah, sure, he could make a mistake, but the premise of the joke is that such a mistake is especially likely for a mathematician counting from 0, and I don't think it is.


No, the point of the joke is that the mathematician uses natural numbers (0, ...) and everyone else uses ordinal numbers (1st, 2nd, ...), so when his wife asked for 12 he thought she meant 0-12. What she actually meant was 1-12. So it was a miscommunication caused both by his habit of using natural numbers for everything and by his being very isolated from the outside world.


I'm reluctant to continue dissecting what after all is only a joke, but:

1. The mathematician's wife was not using ordinal numbers; nor was he. They were both asking "how many of these things are there?". Cardinals, not ordinals.

2. A mathematician who's so used to 0-based working that he always counts from 0 simply will not interpret "0,1,...,11" as meaning that there are 11 things. The same unusual way of looking at things that makes him start from 0 will also make him proceed from "...,10,11" to "there are 12 things".

3. The mathematician's wife didn't "ask for 12". She left him with 12 packages. What happens in the joke is not a miscommunication, it's a slip-up by the mathematician in going from "...,10,11" to "12 things". Which I suggest is highly implausible for a mathematician in whom 0-based counting is deeply ingrained.

And now I've had enough of overanalysing this joke and shall leave it alone. Feel free to have the last word if you wish.

Incidentally, (some) mathematicians prefer their ordinals to start from 0 as well. The trouble is that "first" (derived from "foremost") really ought to mean the one you start with rather than the one numbered 1, and "second" (from the Latin "secundus" meaning "following") really ought to mean the one after that, and "third" etc. are derived from the number names and therefore have to correspond to 3,4,... -- so there's a gap at 2. One of my acquaintances therefore likes to use a sequence of ordinals that goes: first (0th), second (1nd), twifth (2th), third (3rd), fourth (4th), fifth (5th), etc. I don't think it'll catch on.


>How is looping through an array or accessing its last element a frequent operation?

In languages without foreach or map, I'd guess it's one of the most frequent operations. You probably work with the wrong kind of code to appreciate just how awful anything but the [0..n) behaviour is for... I'm sorry, it's so pervasive I can't even come up with a good example. The most trivial thing is offsetting into an array by adding an index.
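
For example (a minimal Python sketch, with made-up contents and offset): with 0-based indexing, element j of a sub-array that starts offset elements into arr is simply arr[offset + j], with no correction term.

  arr = list("abcdefgh")
  offset = 3
  sub = arr[offset:]   # a copy of arr starting `offset` elements in
  # 0-based: element j of the sub-array is just arr[offset + j]
  assert all(sub[j] == arr[offset + j] for j in range(len(sub)))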


Had you been working with a one-based indexing language, there would most likely have been other kinds of errors in your code because of it. Zero is such a pervasive number in software development that, like it or not, you're bound to use it in lots of places.

Even if we assumed that one-based indexing was a good thing, many languages - VB prominently - can't seem to make up their minds and use a single convention throughout: arrays may be one-based, but collections are 0-based, and when you start having to interface with libraries written in other languages or with APIs, you're left walking on eggshells, always wondering if you're not off by one somewhere.


I said this yesterday, but the way an array works is that you take a pointer to its first item and add (size of an item * index) to get a pointer to the item you want. The first item is the pointer + 0. So that's why it's like that.

FWIW I grew up on FORTRAN, which starts from 1, and work with PL/SQL now, which also starts from 1, so I am used to it, but I understand why it makes sense to do it from 0 in terms of the way computers actually work. At least, computers with operating systems written in C.
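
A rough sketch of that address arithmetic in Python, with made-up values for the base address and element size:

  base_address = 0x1000   # hypothetical address of the array's first element
  int_size = 4            # hypothetical sizeof(int)

  def element_address(index):
      # The element's address is the base pointer plus (element size * index),
      # so index 0 lands exactly on the pointer itself.
      return base_address + int_size * index

  assert element_address(0) == 0x1000
  assert element_address(2) == 0x1008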


In a quite similar example of a bad convention, I find it extremely frustrating that many programming languages do not have a modulus operator (%) which behaves as modulo in mathematics. To be explicit, there are multiple choices of the sign of:

  x % y
When either x or y is negative, or both (the full details are described at Wikipedia: http://en.wikipedia.org/wiki/Modulo_operation). In common mathematical notation x mod y, the result always shares the sign of the divisor y. For example,

-14 mod 10 = 6

6 mod -10 = -4

The same is the case in python:

  >>> -14 % 10
  6
  >>> 6 % -10
  -4
Unfortunately, in JavaScript (and various other languages), a different convention is chosen. The result instead shares the sign of the dividend:

  > -14 % 10
  -4
  > 6 % -10
  6
This is really annoying because we quite frequently want to use the modulus operator to do things like normalize an angle to the range [0, 360°). Performing this normalization on an angle a in JavaScript, when a < 0, requires a computation like:

  a_prime = a % 360 + 360
I end up usually defining a function like:

  var mod = function (x, y) {
    // Take the built-in remainder, then, when it is nonzero and x and y have
    // opposite signs, shift it by y so the result takes the sign of the divisor.
    var js_mod;
    return (js_mod = x % y) && (x > 0 ^ y > 0) ? js_mod + y : js_mod;
  };


The worst part is, if you are a mathematician and you expect the modulus operator to behave exactly as it does in mathematics then it can be maddeningly difficult to track down why your program isn't working correctly. You just don't expect simple mathematical operators to start malfunctioning.


Right.

A convenient property that the Python version has (for nonzero n) is

        x==n*floor(x/n)+x%n
This property isn't held by the JavaScript variant, which instead satisfies the somewhat more complex

        x==n*sign(x/n)*floor(abs(x/n))+x%n


In python, math.floor(x / n) can be written x // n, so:

  x == n * (x // n) + x % n
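A quick interactive check of that identity on a few arbitrary sample values, just to make the sign behaviour concrete:

  >>> [(x, n, x == n * (x // n) + x % n) for x, n in [(-14, 10), (6, -10), (13, 4), (-13, 4)]]
  [(-14, 10, True), (6, -10, True), (13, 4, True), (-13, 4, True)]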


math.floor(x / n) can be written x // n

It's worth noting that x//n returns an int and math.floor returns a float, so the statements aren't equivalent.


If you pass a float in as x or n, x // n will be a float.


Agreed. A closely related infelicity is that integer division in most languages (and indeed, on most CPUs that have an integer divide instruction) truncates toward zero. So 1 / 2 and -1 / 2 both evaluate to zero.

The `floor' operation -- that truncates toward negative infinity -- is a more useful primitive, and goes hand in hand with the modulo operation defined as you describe.
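
A small Python illustration of the difference: // floors, while converting the true quotient with int() truncates toward zero, which is what most C-style integer division does.

  >>> 1 // 2, -1 // 2              # floor division rounds toward negative infinity
  (0, -1)
  >>> int(1.0 / 2), int(-1.0 / 2)  # truncating the true quotient toward zero, C-style
  (0, 0)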


Common Lisp, of course, gets modulus right:

  (mod -1 5)  => 4
  (mod 13 4)  => 1
  (mod -13 4) => 3
  (mod 13 -4) => -3


48 hours writing a game in Lua (which indexes from 1) drives this home: in tile-based games, converting screen co-ordinates into tile co-ordinates results in zero-based indices. My graphics routines were full of +1s and -1s as a result.

Generally any time array indexing intersects with mathematics you are going to get the same issue.
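
For instance, the screen-to-tile conversion might look something like this sketch (Python, with a made-up tile size); the division naturally produces a 0-based index, and a 1-indexed tile array then forces the extra +1:

  TILE_SIZE = 32                   # hypothetical pixel width of one tile

  def tile_column(screen_x):
      col = screen_x // TILE_SIZE  # integer division gives a 0-based column
      return col + 1               # ...which a 1-indexed tile array needs shifted up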

That combined with the fact that, in Lua, sometable[0] doesn't produce a not-present exception (it returns 'nil') resulted in a lot of unnecessary cursing.


You can use a metatable to change this behaviour, either by reindexing, or by returning an error on lookups to values not in the range 1 to #table.


The comment about having special notation for different interval types in Mesa caught my eye right away. I constantly make mistakes with "to" and "until" in Scala, where (1 to 9) is the same as (1 until 10). They feel the same, and I've been having a hard time training my brain to treat them differently. I've assumed that using () for exclusive interval bounds and [] for inclusive interval bounds, as in mathematics, would be easier to read. (), [), (], and [] are immediately recognizable visual patterns in mathematical expressions; they're hard to confuse. I often wish I could use them when writing code.

However, according to section 3.1.2.1 of the Mesa language manual [1], that was the notation used by Mesa, and according to Dijkstra, it didn't work. It caused errors and confusion. I can guess why -- in mathematics, the brackets are made large enough to be visually prominent around the contents. In a line of code all rendered in the same size font, they would be easy to lose and would not hit your subconscious mind the way they do in a mathematical expression.

Someday, with better code rendering in editors, maybe we can just use the appropriate brackets to indicate whether interval bounds are inclusive or exclusive.

[1] http://research.microsoft.com/en-us/um/people/blampson/23a-M...


Zero-based indices also behave nicely with the modulus operation. That is, if I have an N-sized array that is indexed from 0 to N-1, then I can take any arbitrary positive integer i and get a valid index through the operation i % N.

Of course, if it was indexed from 1 to N, then (i % N) + 1 would do the trick, but that's more cumbersome and uglier for what can be a frequent operation.
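
A tiny sketch of both wrappings, assuming N = 5:

  >>> N = 5
  >>> [i % N for i in range(10)]           # 0-based: always lands in 0..N-1
  [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]
  >>> [(i % N) + 1 for i in range(1, 11)]  # 1-based: always lands in 1..N
  [2, 3, 4, 5, 1, 2, 3, 4, 5, 1]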


It would have to be ((i - 1) % N) + 1


Why? If all I want is a mapping from i to my index space, I don't see why we should have to shift i down one first. Note that we don't have to come up with the same mapping that the zero-based indexing would.


Because then you'd get to the next row before you've finished N elements.


Yeah, I don't see what you're getting at. Where does "before" come in? We're assuming an arbitrary positive integer, not a sequence. And even if it was a sequence, I see no problems. Let's assume N == 5, then the mappings we get are:

   1 -> 2
   2 -> 3
   3 -> 4
   4 -> 5
   5 -> 1
   6 -> 2
   7 -> 3
   8 -> 4
   9 -> 5
  10 -> 1
  ...
This is a valid mapping from Z+ to our index space. Is your objection that 2 does not map to 2, 3 does not map to 3? That may be a useful property, but (i % N) + 1 is still a valid mapping without that property.


As always, the last word on this topic:

"Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration." -- Stan Kelly-Bootle


One student group at the University of Aalborg designed a language which, among other things, allowed array indexes to be floating point numbers.

So yeah, the array would start at 0,5.


I bet it wasn't an array at all, but a hashtable.


It was, but they did convert it to an integer on the implementation side, so 0.2 and 0.3535 would be the same element.


I'll take this opportunity to bash the Unix "tail" utility. "tail -c +n" is supposed to mean "print all characters (or bytes, rather), starting at index n". Unfortunately, the implementer used 1-origin indexing, which is bad enough when I expect 0-origin. But there's also a clear alternative semantics that doesn't even use the concept of indexing: "tail -c +n" should mean "exclude the first n bytes". Both interpretations lead me to expect the following command to output "56789", but it doesn't, and I have been burned twice already by off-by-one errors due to this:

  $ echo '0123456789' | tail -c +5
  456789


I am sympathetic to your desires, but the man page clearly states the implemented behavior:

    $ tail --version | head -n 1
    tail (GNU coreutils) 8.5
    $ man tail | grep -A 2 bytes=K
       -c, --bytes=K
              output the last K bytes; alternatively,  use  -c  +K  to  output
              bytes starting with the Kth of each file
The fifth byte of '0123456789' is the byte corresponding to '4', and the manpage says it will output bytes starting with the 5th (ie 456789)


Maybe this is some of my love for C shining through, but I've always thought of an array index as denoting the distance from the array pointer to the current/desired position. Thus in an array of integers for example, array[0] represents what's in memory at the pointer itself. array[1] is then the value in memory at sizeof(int) * 1 away. array[2] is sizeof(int) * 2 away, and so on.

A good analogy to this way of thinking is a person's age. When a person is born, or in other words when a person is at the beginning of the array of years that he/she exists, the age index is 0. At the person's first birthday, the age index is 1. Then 2, 3, and so on.

I guess the tl;dr then is that I agree with Dijkstra that array indices should begin at zero.


I think zero-indexing is appropriate for thinking in terms of boundaries and distances from the start or edge of something. C indexes arrays from zero because you have gone zero distance from the boundary of the contiguous memory chunk when you access the first element.

HOWEVER, when you are doing math, you more often think in terms of sets of cardinality N, with N=0 being an empty set. Also, the last 400 years of math has been written with one-indexing, so when you translate matrix formulas or summation notation into numpy, you have to dork around with -1 all the time, and it sucks. Matlab and R are both 1-indexed and it is the only thing that really makes sense when you are working with them.
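
As a small made-up example of that -1 shuffling, here's the 1-indexed sum \sum_{k=1}^{n} k a_k translated into 0-indexed numpy-style code:

  import numpy as np

  a = np.array([2.0, 5.0, 7.0])   # a_1, a_2, a_3 in the maths; a[0], a[1], a[2] here
  n = len(a)

  # 1-indexed formula: sum of k * a_k for k = 1 .. n
  # 0-indexed code: the formula's k shows up as (k + 1)
  total = sum((k + 1) * a[k] for k in range(n))
  assert total == 1 * 2.0 + 2 * 5.0 + 3 * 7.0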

I personally hate one-indexing, but 98% of what I do is data analysis and mathematical programming. If I was doing low-level (-ish) memory addressing, it would be a different story.

I thought Perl and Tcl and Python and Ruby just adopted zero-indexing by default from C, but now I see someone smart advocated for it explicitly (too bad he is wrong, at least part of the time)


I vehemently agree with Dijkstra on this matter: indices should start at 0 and run up to N (but not including N itself).

Any language which does not facilitate this, should be cast from The Bridge of Death into the Gorge of Eternal Peril.

http://www.youtube.com/watch?v=cV0tCphFMr8


I like viewing the handwritten note, but for those who want something cut/pastable:

http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EW...


Somebody made a TrueType font which resembles Edsger's handwriting, see an example at http://fonts.goldenweb.it/pan_file/l/en/font2/Dijkstra.ttf/d...

Direct download of TrueType font at http://fonts.goldenweb.it/download2.php?d2=Freeware_fonts...


I find it much more natural to represent "{1,2,3,4,5}" as "1..5" or "range(1,5)", not "1..6" or "range(1,6)".


Dijkstra's point is that you should start counting at 0, so if you want to generate the first five ordinals, you would write [0:5) which yields {0,1,2,3,4} as expected.


In formal methods it's common to have a special notation 0 -> N which means 0 .. N-1. That way, a sequence of length M+N consists of a sequence with indices 0 -> M and a sequence with indices M -> M+N.
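
Python's half-open range has the same splitting property; a quick check with arbitrary M and N:

  >>> M, N = 3, 4
  >>> list(range(0, M)) + list(range(M, M + N)) == list(range(0, M + N))
  True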


I have come to the conclusion that 0-based indexing is basically building "off-by-one errors" in as a language feature.

The fact that for(int i = 0; i < length; i++) is nicer than for(int i = 1; i <= length; i++) is irrelevant. The C-style for loop is designed to make the complex case easier while making the common case harder. It also makes some compiler optimisations impossible.

A better for loop would be something like for(int i = start to end)

However, with 0-based indexing you end up with, e.g. in Delphi:

for i := 0 to count-1 ... which is an open invitation to error.

In other words, the for loop construct should not be used as an argument for or against 0-based indexing; it should be derived from the indexing used.


C starts at 0 because of pointer arithmetic.

And the proper for loop is "for x in n".



Humans don't start counting at zero, though - we start counting at one. Zero, in fact, was a concept introduced to our arithmetic much later.

A list of twenty items has a twentieth item.

Dijkstra is far too mathematical with this - no student learning programming for the first time (edited: changed from programmer) in the world ever thought "arrays starting at 0 - how natural!".


But indexing is not counting. It bears some similarity to measuring, though. If you look at a ruler, does it begin with a "1"? Or does it have a "1" one unit from the beginning?


Indexing is only "not counting" when you're working with a base address and an offset. In most languages, that's only an "under the hood" reality; the programmer is neither interested in, nor has direct access to, the bit-level goodness underlying the array (or what have you). It's just a bunch of slots, and when you want to put something into (or take something out of) the first slot, then the natural ordinal is "first", not "zeroth". We're just used to a different way of doing things; it doesn't make any more sense to anyone, except by convention.


It IS counting when it measures discrete elements, and it is measuring when it measures distance from a boundary. Discrete sets and continuous sets are very different beasts, and some of the mix-up here seems to be from ignoring their differences.


You might change it again to "no programmer of a language without direct access to memory pointers". Then again, even that would be hard to prove. I definitely didn't see anything wrong with starting at 0, and it made perfect sense for C arrays.


> arrays starting at 0 - how natural!

Better than starting at pi; that would be irrational.


Think about that:

A clock starts at zero. A day has 24 hours, but there is no twenty-fourth hour. Midnight is 0:00.


Clocks are a relatively recent invention.


In fact, clocks are among the oldest inventions of the human mind. Water clocks (clepsydrae) were used in Babylon 4,000 years ago, and in ancient China as well. http://www.nist.gov/pml/general/time/images/water_1.gif There is a 2,050-year-old horologium tower in Athens ("Tower of the Winds"):

"This structure tracked a 24-hour day using both sundials and mechanical hour indicators"


Humans have been around for more than 200,000 years (up to 2.5 million years, depending on how you define the term "human"). So even 4,000 years ago is relatively recent on the scale of human history.


I'd disagree with your last statement. I enjoy the mathematical simplicity of the address into your array being pointer+size(index) as opposed to pointer+size(index-1). Now I will agree that it's probably because of too much math, but it just makes sense that if I want to drop out a term I can multiply it by zero. It's like finding roots of polynomials.


I actually meant programming student, as in, a student learning programming for the first time - naturally, once we reach a certain level of proficiency, more of the esoteric concepts seem intuitive to us; I've edited my post accordingly.

You're right that it is convenient that the address of an item is the address of the first item + current index * item size in memory, but things like that come at the cost of making indices completely unintuitive.


You're (probably) right, and no student learning programming for the first time thought that starting array indexes at zero was a natural idea.

That is because they are learning it for the first time.

Another way of putting that might be that they don't know a single thing about the subject ;)


Nice handwriting!


Fun fact about Dijkstra's handwriting: one winter when he was ice skating (which is a popular Dutch pastime when it freezes) he fell and broke his right hand. You'd think that would stop him from writing his manuscripts manually for a while, but no. Instead, he taught himself to write left-handed, like a boss.


That was my thought as well.


With all due respect, Dijkstra was a great scientist, but not a great software engineer.

As an example, think of Dijkstra's obsession with using logical proofs of correctness (and not tests) as a way to assure software quality. It is strictly correct but simply impracticable.

Sometimes an incorrect but intuitive abstraction is better than a correct but harder-to-grasp one, even if it's only a little harder. These are cases like that.


In Dijkstra's heyday --- up to about 1970 --- logical proofs of correctness were a great deal more practicable than tests as a way to assure software quality. There were several reasons for this:

1. You weren't writing the software in a portable high-level language, but in assembly or machine code. Consequently a new machine was running new software, or simulating an old machine (very slowly). Waiting until the new machine was ready to begin writing the software (assemblers, simulators, compilers, OS) would have enormously delayed delivering the machine to customers.

2. Computers, and thus tests, ran a lot slower. Logical proofs of correctness ran at almost the same speed. Consider the System/360 Model 30: http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainfra... --- 34500 32-bit-word additions per second, compared to 10 billion or so on the machine sitting on your desk, not counting the GPU or possible SIMD instructions. A test that would take your machine a millisecond to run would take the Model 30 about 5 minutes. This also meant that the cost of a bug that necessitated re-running a big compute job was substantial, even if it was caught and didn't produce any worse results than a resubmission.

(There are, of course, proofs of correctness that we can do today with computer help that were not practical then; but those were mostly not what Dijkstra was considering.)

3. Computers mostly weren't interactive. Timesharing was being born, but it hadn't taken off yet, and there were only a small number of computers dedicated to the usage of a single person, or on which you could reserve single-person time. This means that the speed problem is actually much more severe than it appears --- your 1-millisecond test might take 5 minutes to run, but you have to put it in the job queue, and you'll get the result probably within 24 to 48 hours --- and also that the most valued properties of your program are things like "correctness" and not "usability" or "fun". (Usability was actually already important in avoiding user errors and associated incorrect results and wasted computer time, but most people didn't realize this yet.)

4. Computers and support software were much simpler. (And, to hear the old-timers tell it, much more likely to behave as specified.)

Despite all this, I think that some of Dijkstra's emphasis on structuring software so that its correctness is easy to verify is part of the mainstream today, and it's essential in making the software testable. Take a look at The Elements of Programming Style and compare the programs they quoted from textbooks to almost any modern code.


After all the money lost to off-by-one errors (millions? billions? who knows), I have to completely disagree with this. It's workable in C with its pointer arithmetic and other low-level bit twiddling, but this has no business in a higher-level language. I realized how much unnecessary pain I had been suffering the first time I saw the for(;;) loop equivalent in Smalltalk:

  1 to: 5 do: [ " ... " ]


IBM LotusScript allows you to set the lower bound of each array to any integer. You can also configure a whole module to have a default lower bound of either 0 or 1. http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/index....


Yes! One of my favourite EWDs.

(The ones about proof strategies are good too, and those where he laments the quality of the Dutch higher educational system, though I imagine these are less interesting to computer scientists outside the Netherlands. It never ceases to amaze me how much of his writing is still relevant decades later.)


If I'm ever in the position of owning a building containing a lift/elevator, I'm going to ensure that the bottom floor is labelled '0' rather than 'G'.

Unfortunately I didn't have the presence of mind to create my daughter a zeroth birthday cake until a few days after the event.


I've seen that quite often in Europe (y'all are just so darn rational over there).

See http://en.wikipedia.org/wiki/Storey#European_scheme_2


Until reading that link, I had no idea that North Americans numbered the ground floor as 1!


It is already that way in my office building (Azrieli/Tel Aviv)


Many CFD and FEA techniques use formulas that exclude the first and/or last element. When compounded with the 0/1 issue of the original post, it explains why my CFD class was such a living hell. Off-by-one and off-by-two errors all over the place.



