This article grinds my gears. The author quotes a number of mathematicians who correctly state that 1/0 is undefined, then interprets that to mean we can pick any value we want to represent 1/0. NO. We have NaN for a reason: it is an important signal to the programmer that a mistake was made. A language that silently assigns 0 is bunk, as is this article.
Erm, when we say it's "undefined" we mean it literally -- standard mathematical systems of arithmetic do not define a value for that division.
If you make another system of arithmetic, you can define it however you want, so long as it stays consistent with "regular maths" for the operations that are already defined. It's an extension.
> We have "NaN" for a reason
Funnily enough, IEEE 754 defines 1/0 as positive infinity, not NaN (NaN is reserved for invalid operations like 0/0). But none of these things are "the truth" in any reasonable sense of the word, just "useful systems".
Defining it as 0 at least means floating point numbers are (presumably) closed under arithmetic operations, which could be handy.
Ah, of course. Still, this definition gets you something even stricter -- the "things that normal people think of as numbers" are closed under those operations. (Depending on what happens for overflow and 0/0, I guess.)
The story of what happens if we defined, for example, 1/0 = 0 is a little more complicated. I will include it because it is sufficiently interesting.
Equality of fractions is defined differently when zero divisors are allowed in the denominator. A zero divisor is a non-zero number that can multiply with another non-zero number to get zero. For example, if we work mod 12 then 3 is a zero divisor because 3·4 = 0.
If we want to allow zero or zero divisors in our denominators, then we say that a/b = c/d if and only if there is some value s such that s·(a·d − b·c) = 0, where s is anything allowable in a denominator. If we are working with the integers, this s term does nothing: s has to be something that can be a denominator, we only allow non-zero denominators, and a non-zero integer s with s·(a·d − b·c) = 0 forces a·d = b·c anyway.
So if we define 1/0 = 0, then 0 becomes allowable in denominators, s = 0 becomes an allowable witness, and since 0·(a·d − b·c) = 0 always holds, literally every fraction would be equal to every other fraction.
These conventions can be broken (for example, addition of floating point numbers is not associative, as other comments point out), but doing so is definitely not "natural". In other fields of mathematics, like measure theory, it is possible to define things like "zero times infinity is zero", which is traditionally undefined but is a convenient shortcut and does not break anything that people working in measure theory care about.
IMHO the problem here is that most people focus on the "wrong side" of 1/0.
I mean, the mathematical definition of division does not say that 1/0 is indeed something and that that something is "undefined" or "NaN" or anything else. What it says is "I cannot do 1/0; the division operation a/b does not apply when b is 0".
So 1/0 is not a thing in itself in mathematics; it's an operation that simply cannot be performed.
Now, some time ago, "division by zero" simply threw an error. It signaled "this is not something that can be done". Undefined, NaN, anything else, including 0, is not really something with a mathematical justification. It's merely a practical approach that encapsulates the error in some form of pseudo-value so it can be controlled to some extent.
Personally, I don't really see how 1/0 = 0 is better than 1/0 = NaN or "undefined" or "Infinity".
The practical downside is that it might make equations go slightly wonky, rather than completely wonky if a non-number is returned.
For example, say a divide by zero happens with normalized values, so the immediate result is off by at most one. The odds of anyone catching that are probably low. A NaN, meanwhile, will infect every number that comes into contact with it, bubbling the mistake up much faster...
Under this system, programs are easier to debug if they use bigger numbers... That property does not seem like a win to me.
If I attempt to open a file, but an access fault occurs, I'd rather be told the fault and given a chance to recover than receive an empty file.
It seems straightforwardly better to me, because the programmer/user gets an immediate "you messed up" signal, instead of silently returning the wrong values until someone eventually (hopefully) notices.
Are we reading the same FA? In the section titled "The Real Mathematicians" the author has quotes of mathematicians saying that defining division by zero as zero is OK.
I find particularly interesting Leslie Lamport's comment that "Since 0 is not in the domain of recip, we know nothing about the value of 1 / 0", which I think is the most correct mathematical stance.
Then again, I think it is all a red herring. This (Pony's decision) is not about the mathematical definition of division. It is about the trade-offs computational systems make to manage the situation.
"defining division by zero as zero is OK" And those mathematicians should be classed as wrong. If you treat zero as nothing, a value divided nothing does not make it magically disappear, it should just cancel out the mathematical operation. So 1/0 should be 1, 42/0=42 and so on. Otherwise to apply their logic, a*0=0 would have to be applied. Again if you take a value and multiply it with nothing, it should cancel out the maths operation. It is possible for whole areas of a science to follow down the wrong rabbit whole, medical science best shows this over the last 100-200yrs since scientific equipment has improved and the current thinking about something in the body has changed when new discoveries appear that contradict. Maths is not untouchable either, especially considering the fact Big G and the speed of light varies. Check out Rupert Sheldrakes of Cambridge Uni talks on this subject. The official bodies response was to mandate G and speed of light is now constant when its not.