1/0 = 0 (hillelwayne.com)
650 points by ingve on Aug 10, 2018 | 570 comments



My problem with "1/0 = 0" is that it's essentially masking what's almost always a bug in your program. If you have a program that's performing divide-by-zeroes, that's almost surely something you did not intend for. It's a corner case that you failed to anticipate and plan for. And because you didn't plan for it, whatever result you get for 1/0 is almost surely a result that you wouldn't want to have returned to the user.

When encountering such unanticipated corner cases, it's almost always better to throw an error. That prevents any further data/state corruption from happening. It prevents your user from getting back a bad result which she thinks she can trust and rely on. It highlights the problem very clearly, so that you know you have a problem, and that you have to fix it. Simply returning a 0 does the exact opposite.

If you're one of the 0.1% who did anticipate all this, and correctly intended for your program to treat 1/0 as 0, then just check for this case explicitly and make it behave the way you wanted. The authors of Pony are welcome to design their language any way they want. But in this case, they are hurting their users far more than they are helping.


An issue with D that has repeatedly engendered heated debate is what should happen when a programming bug is detected at runtime. The two camps are:

1. The program should "soldier on" if it can.

2. The program should go immediately to jail, it must not pass Go, and must not collect $200.

I'm solidly in the latter camp. If a program has entered a state unanticipated by the programmer, then there is no way to know how it entered that state (until it is debugged), and hence no way to know what the program might do next (such as load malware).

1/0 is a bug, and the program should immediately halt.


> If a program has entered a state unanticipated by the programmer, then there is no way to know how it entered that state [...] and hence no way to know what the program might do next (such as load malware).

... or kill the patient ( https://en.wikipedia.org/wiki/Therac-25 ) or set the company bankrupt overnight ( https://en.wikipedia.org/wiki/Knight_Capital_Group ), or simply delete all your data (happened to me with some buggy media player that thought my $HOME directory belonged to its download cache).

Programming languages don't generally allow us to "handle", from inside the faulty process, stuff like NULL-pointer dereference, double-free, or segmentation faults. And that's a good thing, because as you said, at this point, there's a very high probability the program is in some rogue state.

(Making special cases for NullPointerException or OutOfBoundsException, like some languages do, is, IMHO, a bad idea that spreads the confusion between programming mistakes (i.e. coming from the source code) and invalid runtime conditions (coming from the environment). (I'm avoiding the ambiguous terms "errors" or "bugs" here)).

Making "divide by zero" non-fatal belongs to the same family than making "((int)0)" return zero, or making "((int)0) = x" a no-op. At first, it seems like this will only hide programming mistakes ; but this impression comes from my current programming habits, which are tailored to avoid these situations.

But maybe there are advantages in being able to write these things on purpose (at the cost of losing the runtime detection of some programming mistakes). After all, I'm perfectly happy with "free(NULL)".


> (Making special cases for NullPointerException or OutOfBoundsException, like some languages do, is, IMHO, a bad idea that spreads the confusion between programming mistakes (i.e. coming from the source code) and invalid runtime conditions (coming from the environment). (I'm avoiding the ambiguous terms "errors" or "bugs" here)).

Actually, if you check out the interviews with Java's designers, the checked exceptions mechanism was designed for exactly that purpose: unchecked exceptions are for bugs that should never happen in well-written code, while checked exceptions are for conditions that can happen regardless of how good your code is (e.g. I/O errors).

See: https://www.artima.com/intv/solid.html

There are of course also Errors for fatal conditions that should never be caught (actually, in the CLR, even if you catch such an exception it will be re-raised automatically at the end of the handler).

Another interesting thing in the CLR is Constrained Execution Regions, which allow you to run cleanup code reliably even when such a fatal condition is encountered (but the code to be run is limited, e.g. it cannot allocate).


The mis-categorization of some exceptions[0] as checked or not is part of why checked exceptions seem like a mistake. The other is when exceptions cross boundaries of human organizations like nested libraries.

[0] NumberFormatException is a RuntimeException, as are all IllegalArgumentExceptions. WTF?


Another is that well-known attempts at checked exceptions in practice provide an incredibly limited vocabulary for talking about raised exceptions - usually just unconditional "might raise X" or "does not raise X". If you could say things like "raises anything raised by my argument f, except for X because I catch that" it seems like it might be a much better idea (especially if you can infer those...)


> (Making special cases for NullPointerException or OutOfBoundsException, like some languages do, is, IMHO, a bad idea that spreads the confusion between programming mistakes (i.e. coming from the source code) and invalid runtime conditions (coming from the environment). (I'm avoiding the ambiguous terms "errors" or "bugs" here)).

And that's why I like dual error mechanisms, like the panic vs manual error handling mechanism in Go (despite all its verbosity). It's very important to make the difference between an external error (that should be considered something "normal" to deal with) and an invalid state (that should lead to a halt of the program, maybe after logging something, or before trying to restart the program from zero).

Traditional exception mechanisms (à la try/catch) make it too hard to deal with an external error, and too easy to think you can recover from an invalid state.


D draws a clear distinction between programming bugs (not recoverable) and environmental errors (which can be recoverable).

The former throws an Error, the latter an Exception.


Great analysis! I think that summarizes a lot of dev arguments. That said, Erlang appears to agree with both camps. If an error occurs, the _process_ immediately crashes (not an OS process; it's Erlang lingo for an actor or a green thread), but the _program_ soldiers on as if nothing happened. The idea behind the design, I believe, is that programmer errors are unavoidable, but that e.g. a bug in some edge case triggered by uncommon data shouldn't crash the entire server.

I don't know much about Pony, but given that it is also actor based, I'd imagine this would be a great opportunity to crash early and eagerly. It gives you all the advantages of catching a bug early while not crippling a production app under what might be a pretty low-impact, low-frequency bug.


Java has essentially the same behavior with threads. If an error occurs in a thread, it will throw an exception that will terminate the thread if uncaught. Other threads are not directly affected. This is typically what you want as threads are usually independent of each other, often processing an isolated request, which may encounter a bug due to bad data or other conditions. Contrast this to languages like C++, where the usual behavior is to segfault and crash the process.

Special care must be taken if threads are manipulating shared data structures, especially inside a "critical section" where the data structure invariants must be maintained. This is why automatically terminating the JVM when OutOfMemoryError occurs is a best practice, as code is often not paranoid enough to handle it.


One big difference between Erlang and Java is that all processes (threads) in Erlang are safely contained in private memory, whereas in Java threads occur in public memory. Another big difference is that Erlang has supervisor processes to restart child processes that have crashed. Java just farts itself.


Indeed. Java is in fact a sort of worst-of-all-worlds in that it's even possible to catch things that are really unrecoverable errors, like StackOverflowError or OutOfMemoryError. To the uninitiated it might not appear that catching these is that bad, right? But in addition to the problems with finally clauses not necessarily running to completion under such circumstances, catching these exceptions can cause extremely hard-to-debug deadlocks, because object locks won't be released properly.

This is an area where the design of Java[1] is badly screwed up.

[1] JVM, really. It's pretty unavoidable on most (all?) JVM languages.


In C++ the default behaviour is for the runtime to call std::terminate, not segfault. It's easy to catch exceptions by wrapping the callable in a packaged_task and then the exception is available in the future. The key point is that the exception is rethrown when accessing the result through the future.

Swallowing exceptions is a really poor idea. Does Java really do that?


Erlang originates from telecom, where the behavior you describe makes perfect sense. Equipment should work all the time; you don't want to (or even can't) send out people to debug/restart them.


It's not just programmer error, it can soldier on through some hardware errors, rogue cosmic rays, and arguably the hardest - other people's software errors, like os errors, driver errors, library errors, network stack errors...


Regarding "some hardware errors": This might include operationg conditions you didn't consider; say:

The divisor in your code went through a non-ECC DRAM module (maybe in a disk drive) that runs hotter than its specified operating temperature would allow and a random bit-flip changes your `1` into a `0`.

On this topic, the talk on Bitsquatting from Artem Dinaburg during Defcon 19 is worth watching. The most interesting part on bit flips starts around 15:05 minutes: https://youtu.be/9WcHsT97suU?t=15m5s


Adopting this policy was what destroyed the first Ariane 5.[0] IIRC, a conversion from floating-point to a 16-bit integer overflowed, raising an exception which reset the flight control system. The flight control system was redundant, but both copies experienced the same error. The result that was being computed would not have been used.

The lesson I took from this is that neither of these two options is acceptable in software that has to work all the time, like much real-time software. Instead, we should detect all the bugs in such software before runtime, which is only feasible for small systems such as seL4[1]. So we should diligently minimize this software.

For other programs, the best way to handle an error is context-dependent. If a rendering bug will result in a glitch showing in a texture onscreen, that's preferable to exiting the game in the middle of a raid. If a PID motor control system for a robot arm has a division by zero, halting the program without first slowing the motors to a stop could be catastrophic, even causing fatalities. And of course there are numerous cases where continuing execution in the face of such an error is far worse.

[0]: https://web.archive.org/web/20000815230639/http://www.esrin.... [1]: http://sel4.systems/


To say that either (1) or (2) is always the right choice is unreasonable. It depends on context.

If a program is running under a supervisor, it makes sense to bail, as a new clean instance will become available.

If it was the code of the life support machine that displays data on the LEDs that I was connected to, I'd prefer that it soldier on rather than stop functioning altogether because data for one of the digits was out of the 0-9 range.

I'm sure there have been many instances where spacecraft had partially malfunctioning software that was remotely corrected because it didn't entirely give up but rather continued to accept input.


Aircraft and spacecraft all have backup systems for critical software. When software self detects a bug (such as an assert failure) the offending computer is shut down and electrically isolated, and the backup is engaged.

What is absolutely NOT done is soldiering on with a computer in an unknown state.

Any software system that is life critical is either designed this way or is very badly engineered. I've written a couple articles about this:

https://www.digitalmars.com/articles/b39.html https://www.digitalmars.com/articles/b40.html


And what if the program is at the lowest level and gets into an impossible state due to a hardware issue (a bad bit)? At some level a program has to soldier on.


Then it sets an I/O pin that pulls the reset switch.


I can't tell if you're being serious. The appropriateness of that would depend on whether the hardware condition was persistent, in which case we have a reboot loop. It doesn't seem reasonable for software meant to run in a harsh environment to assume perfect runtime conditions. Do probabilities and tradeoffs ever get considered when faced with inconsistent state?


If the program is designed to "soldier on", then why not? The question is how is the program designed to react in face of failure: should it fail safe? And is it safe to crash on any error?

My point of view is that a program should abort if it encounters an irrecoverable error, such as imminent memory corruption. However, if it's designed to fail functional, then it could continue to work, either by reinitialising itself, or by aborting the current operation. However, it must be said that it's hard to design such programs, so the option of bailing out is very attractive.

Example: an out of bounds access is a typical programmer error. This should probably result in an abort during development, so that it can be fixed. However, it is possible to not want to abort just because something is not accessible when in production. The program could instead raise an exception and clean up the call stack up to its initialisation point, then notify the surrounding system that a problem was encountered and resume service.

The surrounding system could then keep track of the encountered issues and try to perform a recovery after a threshold is reached: this is how Android recovery works for e.g. crashing system services, if I recall correctly; it first cleans some application settings, then system settings, then does an OS reinstall. Now in this example the trigger condition is a crash, but it doesn't necessarily have to be like that.


I can't say that I'm firmly in either camp.

What seems much more important to me is that the behavior is well documented. If I am working with a system where 1 / 0 is defined to be zero, I will deal with that just as I deal with other peculiarities like 1 / 3 being 0 and 1.0 / 3.0 only approximately being a third in some systems. It's a practical concern like many others in programming.


My comment from when this came up on Reddit, slightly edited for context:

`/`-by-0 is just an operation and it tautologically has the semantics assigned to it. The question is whether the specific behaviour will cause bugs, and at a glance that doesn't sound like it would be the case.

Principally, division is normally (best guess) used in cases where the divisor obviously cannot be zero; cases like division by constants are very common, for example. The second most common usage (also best guess) is to extract a property from an aggregate, like `average = sum / count` or `elem_size = total_size / count`. In these cases the result is either a seemingly sane default (`average = 0` with no elements, `matrix_width = 0` when zero-height) or a value that is never used (e.g. `elem_size` only ever used inside a loop iterated zero times).
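As a rough sketch of that claimed sane-default behavior (a hypothetical `pony_div` emulating Pony's integer convention in Python):

    def pony_div(a, b):
        # emulate Pony's integer convention: n / 0 == 0
        return a // b if b != 0 else 0

    items = []
    average = pony_div(sum(items), len(items))  # 0 with no elements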

It seems unlikely to me that `/` being a natural extension of division that includes division by zero would be nontrivially error-prone. Even when it does go wrong, Pony is strongly actor-based, has no shared mutability and permits no unhandled cascading failures, so such an event would very likely be gracefully handled anyway, since even mere indexing requires visible error paths. This nothing-ever-fails aspect to Pony is fairly unique (Haskell and Rust are well known for harsh compilers, but by no means avoid throwing entirely), but it gives a lot more passive safety here. This honestly doesn't seem like a bad choice to me.


An example well-defined use case is if you want to compute a harmonic mean, e.g.

x = 2/(1/a + 1/b)

This is a form of average where you are giving increased importance to the smaller number. It frequently pops up in science/engineering, e.g. in hydrology when you are computing flow of water underground.

In this case, it's "obvious" that when e.g. a is zero, you want 1/a to be zero so you simply return b. There's typically a clear, big distinction between "smallest realistic number", for instance 1e-3, and a zero, whether actual zero or 1e-34.

For instance Numpy offers a nice way of doing this:

  import numpy as np

  def safeInv(a):
    tol = 1e-16
    # np.divide writes into `out` only where the condition holds
    return np.divide(1.0, a, out=np.zeros_like(a), where=a > tol)
This tells numpy to put the result of the division into an array of zeros shaped like a, but only do the divide where a > tol, so it returns 0 wherever a <= tol.

In this case, though, the programmer understands this division is somehow special and should allow div-by-zero, so it is handled specially.


Quite the opposite I would say: the harmonic mean can be seen as a very good justification for `1/0=inf` as the pragmatic choice.

If, say, `a` is mathematically very small (and `x` is accordingly expected to be very small) but `a` becomes zero due to rounding behavior, then having `1/a=inf` results in `x=0`, which is arguably closer to the expected result than e.g. `x=2b`.

As someone involved with numerical methods, I have very often relied on `1/0=inf` because it ultimately gives the proper behavior in case of rounding errors in the denominator.
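To make that concrete, a small illustration with numpy floats, where IEEE semantics give 1/0.0 = inf (a sketch of the rounding scenario described above):

    import numpy as np

    a, b = np.float64(0.0), np.float64(2.0)  # a underflowed to zero
    with np.errstate(divide="ignore"):
        x = 2 / (1 / a + 1 / b)  # 1/a == inf, so x == 0.0, as expected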


I think for floating point 1/0=Inf makes a lot of sense, because divisions are frequently done with continuously-varying properties. Division by integers is different, since you do a different kind of work with it; there is no "very small" integer.


"In this case, it's "obvious" that when e.g. a is zero, you want 1/a to be zero so you simply return b"

b obviously isn't what you want from this harmonic mean when a is 0. And in any case x will be 2b if 1/a is 0, not b.


So, I did forget the correction such that you don't get 2b. The general case is described e.g. in the link below, and this is known as a "zero-corrected harmonic mean". It is actually "obvious" in some cases, but I'll concede that in other cases it's not.

https://www.rdocumentation.org/packages/lmomco/versions/2.3....


Eh? That page says nothing about 1/0 being 0, or anything like that. That function takes the harmonic mean of the non-zero samples, and then adjusts the result by multiplying it by the fraction of the samples that are non-zero. It purposefully avoids dividing by 0. And the result of that function for a==0 and b!=0 is b/2, not b or 2b. So not only is what you claim to be obvious not obvious, it's flat-out wrong.


> `average = 0` with no elements

Imagine a Mars lander program that looks at the average of altitude measurements to decide when it's ok to jettison the parachute, sees no measurements, and decides that altitude is zero.


You have a point, but I think this is a difference in the target markets. NASA can afford to have a thoroughly-tested, redundant, quickly-accessible failure path that restarts and reinitializes the process, and their processes are so thoroughly tested that they can expect their production instances not to have any software failures.

Pony is made for a more typical scenario where every failure case is a new, untested error path, and history shows that general developers handle errors badly. Pony makes a heroic effort to remove the error path from the language semantics entirely: every failure is explicitly annotated in the code, and has to be handled there. This is much better for the typical developer, allowing them to produce very robust code without NASA's degree of investment towards it. Giving Pony a special unwinding mode just for division by zero would be very dangerous, because this is a new path that only exists for a rarely-encountered sort of error; you cannot assume a sudden shutdown is going to do less damage.

Having Pony present the error cases inline on division by zero would certainly be safer, but there is a degree of disproportionality here: most divisions are impossible to go wrong in this way, and I suspect most of the rest are benign or would be quickly caught by the check after. Since Pony aims to compete against incumbents that are generally less safe in this regard (C++ has UB, Python has exceptions but makes it hard to handle them correctly, etc.), a compromise seems sensible.


It's not only NASA. Zero is just not a reasonable default value for an average. Imagine average credit card balance for FICO score, average blood pressure, average temperature in a freezer, etc.

UB is not the only way. You can have NULLs (they come with their own sets of problems, but still).

In many cases no value is better than incorrect value.


It's a value type, so you can't really have NULL. I don't see the issue with those examples; if you have no samples, there's nothing to mislabel. An average blood pressure of 0 across 0 patients is only going to harm 0 patients.


> I don't see the issue with those examples

Let me elaborate then.

Average credit card balance predicts probability of default. Joe who maxed out his credit card is higher risk than Jane who pays back her entire credit card balance every month. Now we have Jack with no credit card. We predict that Jack is low risk because his average credit card balance is zero.

Second example, defibrillator that monitors blood pressure and shocks the patient when his heart stops (blood pressure drops to zero). We attach the defibrillator to a patient, turn it on, and it shocks the patient immediately because there are no blood pressure measurements yet and therefore it thinks the average is zero.

Third example, a thermostat that turns on the freezer if average temperature is above a set threshold. It never turns on because average (of zero observations) is already zero.

Of course you can hard-code handling of those special cases, but if you did not, failing would be preferable to continuing to work incorrectly.


It seems to me taking averages would be inappropriate in all three of those scenarios.

Credit risk is based on total debt, and since credit cards are a revolving line of credit, the entire credit limit is considered debt, even if the balance is zero. That's why you can improve your credit score by closing out a credit card that you never use and that carries a zero balance (which we did in order to qualify for a mortgage several years ago). In other words, it's a sum, not an average; there's no division involved.

Setting aside the fact that there are much more reliable ways to detect a stopped heart than blood pressure: If you're taking a running average of BP over time, a low reading after a heart attack would just be a single data point, and you'd probably have to accumulate a bunch of them in order for it to register (particularly if you got a high reading just before the event). Even if you take a reading every five minutes, which is ridiculously frequent for BP, the patient will be long dead by the time you notice. You should be acting based on the last reading; again, no division involved.

I don't see why a freezer would be based on running averages rather than the most recent reading either. Thermostats generally work off of two thresholds: a higher one above which the compressor turns on, and a lower one below which it turns off. That smooths out any measurement noise without taking averages. Even if you do use averages for some reason, though, presumably it will start measuring at _some_ point and the compressor will turn on; I don't see why the average would stay at zero.

I'm not convinced that `1/0 = 0` is correct in any meaningful way, but I feel like any situation where it would cause bugs more critical than a UI issue probably points to a deeper design flaw. After all, if the alternative is to crash on a failed assertion, that's not necessarily preferable in a life-or-death situation.


Everything is relative. $1,000 balance is a lot of money for someone making $35,000, not so much for someone making $350,000. Many independent variables that go into risk scores tend to be ratios or averages, hence zero (or near zero) denominator is possible.

It is normally addressed with floors and ceilings or something like Laplace rule of succession, so that no data at all results in a reasonable number. Very rarely that reasonable number is zero.

I'm not saying taking the average is necessarily good in the examples given, just that if you are computing the average, then that computation should not silently return zero if there's no data to average.

That's the way AVG() in SQL and mean() in R work: they return NULL or NA rather than 0. If you know that NA should be 0 you can explicitly COALESCE, but that should be your decision rather than the default behavior.
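Python's statistics module makes the same choice, for what it's worth; the caller has to decide explicitly, much like COALESCE:

    from statistics import mean, StatisticsError

    try:
        m = mean([])  # raises: mean requires at least one data point
    except StatisticsError:
        m = None      # an explicit decision, not a silent default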


I feel for the typical target market for Pony is going to be doing operations like that in floating point, or perhaps some explicit fixed-point non-integer type in financial cases. Whilst, yes, I can see this causing issues in some small subset of cases, the question really does come down to cost-benefit. Integer division by zero is not the only way for arithmetic to go wrong; would you expect every overflow to be checked too?


https://en.m.wikipedia.org/wiki/Credit_score_in_the_United_S...

Credit scores don't work this way. Zero is not a valid credit score, and the things you mentioned describe different components of the score.

I work a lot with survey data, to be interpreted by humans. In this problem domain, zero is a sane value for an empty average, though "N/A" is usually better. I wish our language would let us define it as zero so long reports don't die in the middle. But it's about your problem domain, as usual.


There are many different credit scores. The range of valid FICO scores does not include zero, but internal acquisition and behavior scores that banks use might.

But in this case it's irrelevant because average balance is independent variable (input of the scoring model).


You can check for 0 and throw in those cases, and then propagate the error path appropriately. And as they've already said in this thread, they will be providing a checked arithmetic in the standard library to do that for you.


Well, sometimes the average is 0, e.g. (-2 + 2)/2.

And the whole point is that this is meant to catch unforeseen interactions as soon as possible. If you add a check it's no longer unforeseen, and it may easily slip the programmer's mind.


If I'm not mistaken, jeremyjh was referring to checking for 0 in the denominator, not the numerator.


Pony requires the programmer to handle every case of every function...except this one case, where it will just pick an arbitrary value to handle a case the programmer forgot, hiding the very error Pony was designed to guarantee gets solved. Why not allow partial functions everywhere and have them default to 0 where undefined?


That's a terrible default for an average function lol. Once again in nearly all cases, taking the average of a list without elements is an error.


What happens for 0 / 0? Is this also 0?


It's extra zero.


Is this the same as 0 * 1 / 0?


It returns a larger value of 0.


All zeroes are equal, but some are more equal than others.


ε


inf 0


I almost want two different division operations: One where 1/0 = 0, exclusively for use in progress bars and stuff like that, and another one for everything else.

Because frequently division by zero indicates a bug. But similarly frequently, I end up crapping out annoying little bits of code like

  if (foo == 0):
    return 0
  else:
    return bar / foo


I'm reading the "Pony" tweet quoted in TFA and your comment and I'm left very puzzled: is it really that common to want x / 0 == 0? In practice, where does that crop up?

You say that you frequently have to write your little shim but honestly I don't remember writing code like that in recent memory.

You talk about progress bars, I suppose it makes sense if you somehow try to copy 0 elements for instance, and you end up dividing by zero when computing the progress, like:

    pos_percent = (copied_elems * 100) / total_elems
And both copied_elems and total_elems are zero. But in this case wouldn't you want the computation to return 100% instead of zero?

It's also a bit odd because it introduces a discontinuity: as the divisor goes towards zero the result of the operation gets greater until it reaches exactly zero and it yields 0. Wouldn't it make more sense to return INT_MAX (for integers) or +inf for floats? If you're writing a game for instance it might work better.

I guess it just goes to show that it probably makes a lot of sense to leave it undefined and let people write their own wrapper if necessary to do what works best for them in their given scenario.


Practically, it's quite common to not immediately know the divisor. In cases where the divisor is initially unknown but takes an imperceptible amount of time to compute it's better to render 0%. Otherwise you might get a flash of a full progress bar (for example) while the divisor is determined.

Of course, it's context dependent. As others mention, your code might be full of stuff like X / (divisor || 1).


The nicest UX would be to say something like "waiting..." while the denominator is zero. Presenting a zero to the user instead is probably an acceptable short-cut, but at the end of the day, it's a UX decision. Which is why it should be handled in explicit if-else code, that lives as close to the UI as possible, rather than being buried in the semantics of basic numerical operators.
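For instance, something like this hypothetical helper, living next to the UI code (`progress_label` is made up for illustration):

    def progress_label(done, total):
        # a zero denominator is a UX question, not a math question
        if total == 0:
            return "waiting..."
        return f"{done * 100 // total}%"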


If you don't know the number of elements, shouldn't that be tracked in another variable, instead of reusing the total elements variable and assuming 0 elements means unknown?

Also, if loading the total elements takes a significant amount of time, shouldn't this loading also be reflected in the progress bar?


I’ve run into this a bit with progress related stuff. I bet if you looked at progress bar libraries they’d have similar logic built in.


    X / (divisor || 1).
It's invalid code in C, C++ and Java.


Boolean operators returning values is common, especially in dynamically typed languages. The code is valid in JavaScript, for example, and in Python (except it uses "or" instead of "||").


In Python, it should probably be corrected with // or 1.0 to make it either an integer division or a float division.


Ah, that's true. Good point.


This is valid C and C++.


It is, but it will always give you a divisor of 1 (because "true" is 1 -- I also thought it would work until I tested it):

    % cat >division.c <<EOF
    #include <stdio.h>
    
    int main(void)
    {
            printf("1/0 = %d\n", 1 / (0 || 1));
            printf("1/2 = %d\n", 1 / (2 || 1));
    
            printf("1.0/0 = %f\n", 1.0 / (0 || 1));
            printf("1.0/2 = %f\n", 1.0 / (2 || 1));
    }
    % gcc -Wall -o divison divison.c
    % ./divison
    1/0 = 1
    1/2 = 1
    1.0/0 = 1.000000
    1.0/2 = 1.000000
In Python this is not the case (note: the first session below is Python 2, where / on ints is integer division; under Python 3, / is true division):

    % python2
    >>> 1 / (0 or 1)
    1
    >>> 1. / (0 or 1)
    1.0
    >>> 1. / (2 or 1)
    0.5
    % python3
    >>> 1 / (0 or 1)
    1.0
    >>> 1 / (2 or 1)
    0.5


Some trivia: GNU extensions to C/C++ include a "?:" operator that does what you'd want in this case, e.g.

    x / (divisor ?: 1)


It shouldn't be valid in C++. Booleans cannot participate in arithmetic operations. You will get a warning from the compiler if you are lucky.

C doesn't have booleans and treats them as integers 0 or 1; it can do the math and will always return 1.


That's not true. C++ does define an implicit conversion from bool to int:

https://en.cppreference.com/w/cpp/language/implicit_conversi... (under "Integral promotion")

And C has a boolean type as of C99.


What do you expect for that C code?


In the context of this discussion, the idea was that it would act the same way as Python. But it obviously doesn't.


Real example: I'm collecting some quality signals from a corpus, most of which are some form of ratio, average, weighted average, or scaled average of counting various quantities within the documents. If the elements being counted are missing, I want the term involving that quality signal to disappear from the final ranking calculation.

Defining x / 0 = 0 gets this behavior for free, while leaving zero as an exception means it has to be caught for every single signal calculation, which is a pain when there are potentially dozens of different signals, all of which are counting different things (and have different divisors). I've actually defined a helper function to do this automatically, which also lets me change the default "null object" easily if I choose a different representation or want to apply some baseline value.
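Presumably something along these lines (a sketch; the name `ratio_or` and its default are hypothetical):

    def ratio_or(num, den, null_object=0.0):
        # x / 0 maps to a configurable "null object" instead of raising
        return num / den if den != 0 else null_object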


Defining a function would seem like exactly the right thing to do! There's no problem.


Module-scoped operator overloading would be really, really nice in this situation, though.


No, it probably wouldn't, unless you were using integers. Pony still uses floats and returns NaN or Infinity or something.


>It's also a bit odd because it introduces a discontinuity: as the divisor goes towards zero the result of the operation gets greater until it reaches exactly zero and it yields 0. Wouldn't it make more sense to return INT_MAX (for integers) or +inf for floats?

Unless you're using unsigned ints, the discontinuity will exist no matter what you do, because of negative divisors.


Rust has your back:

    pub fn checked_div(self, rhs: u8) -> Option<u8>

Checked integer division. Computes self / rhs, returning None if rhs == 0.

You would use this as lhs.checked_div(rhs).unwrap_or(your sane default)

This is dramatically better than always returning zero silently, as doing so is bound to be wrong in certain cases. If you run into a situation where you are afraid your RHS may be 0 but still want to do the right thing, this is what you'd use.


This is actually included in the IEEE754 floating point standard - there's a concept of "trapping" vs. "non-trapping" exceptions, where implementations are allowed to decide which exceptions should trap.

Unfortunately, GCC only lets you enable this globally (within a program): see http://www.gnu.org/software/libc/manual/html_node/FP-Excepti....
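For comparison, numpy exposes the same trapping/non-trapping choice with per-block scope rather than globally (a sketch, not the GCC mechanism):

    import numpy as np

    with np.errstate(divide="raise"):  # trapping: division by zero raises
        try:
            np.float64(1.0) / np.float64(0.0)
        except FloatingPointError:
            print("trapped")
    with np.errstate(divide="ignore"):  # non-trapping: quietly yields inf
        print(np.float64(1.0) / np.float64(0.0))  # inf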


We will be introducing two sets of integer math operators in Pony: the current ones, which are non-partial and can over/underflow (and where division by zero == 0), AND new ones that will be partial functions that cause an error on under/overflow and division by zero.


> new ones that will be partial functions that cause an error on under/overflow and division by zero.

A "strict" mode is usually always an afterthought once we have errors & people run it in strict mode and facepalm.

One of the first "utilities" I wrote for PHP was something called "pecl/scream", which turned off all the "unchecked" operations across the whole VM.

And in general, this works out poorly for code quality.


There is no golden path here. Checked exceptions for division would make a lot of Pony code objectively worse. Unchecked exceptions are great for PHP, Haskell, Rust or Go, but Pony is trying to do something different - to literally make it impossible to panic in an operation without describing that in the type system. The ergonomics of divide by zero in this context are absolutely debatable, not an issue to dismiss out-of-hand.


Well, except for kernel panics and hardware errors...


You can write user code in Pony that will cause a kernel panic or hardware error?


Yes. Hardware errors, we can't really protect from those, or a kernel panic. A kernel panic is a panic in non-Pony code.


> You can write user code in Pony that will cause a kernel panic or hardware error?

You can always (as in OS with lazy allocation) allocate enough memory to get OOM no matter how safe your language is.


It may be forced to error as a result of one.


Pony is not 1.0 yet. The language is still malleable.


Couldn’t you alternatively introduce total operators that just grow the memory size as necessary instead of overflowing? For me if you’re handling wrapped types, I’d always prefer the numbers to grow into extra memory instead of overflowing or raising exceptions. Erlang, for instance, does this well. Obviously, division may still not be total, though you could define an operator like `//` that is.
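Python's integers already work the way described; they promote to arbitrary precision instead of wrapping:

    x = 2**63 - 1  # max value of a signed 64-bit machine word
    print(x + 1)   # 9223372036854775808: grows instead of wrapping around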


A wrapped types library with the corresponding performance tradeoff is something I expect will be added to Pony at some point. There's definitely value in them for a variety of use cases.


Isn't a ternary here nicer?

  return (foo == 0) ? 0 : bar / foo
or even

  return bar / max(1, foo)
in the case where foo is an integer, since a tiny foo would overflow your range anyway


I would say it doesn't really matter much as both are readable. I would prefer either the original if/else or the ternary solution over the last solution as I personally think the intent is not as immediately clear in your "max" form. I would optimize for readability over density, unless there is some reason to write it differently (e.g. performance).

just for fun, in kotlin (making use of expressions removes some verbosity):

    return when (den) {
       0 -> 0
       else -> num / den
    }

    return if (den == 0) { 0 }  
           else { num / den }


Ternary doesn't really look any better. I'd do this in a language that supports it though:

  return foo && bar / foo


I've run into this a lot, but I'm also not sure what I really want to happen. Even for progress bars, the behavior can be different depending on how exactly you're obtaining the denominator -- does it increase dynamically or is it known from the beginning? Is it one of those fake progress bars that just ignores the fact that it'll stay at 100% for the next hour, or would it filling up genuinely mean that the work is actually done? After all, if you have 0 operations total and you've finished 0 of them, are you really 0% done? Wouldn't it make more sense to say you're 100% done? At the end of the day the user is trying to figure out how long they should keep waiting, to which the answer would be "you don't need to wait, there's no more work left to do".


> But similarly frequently, I end up crapping out annoying little bits of code

Isn't this exactly the point of having a utils library with commonly used pieces of code?


Some utilities libraries can be reflections of deficiencies in the language itself and not simply about common code reuse.


Agreed. Both really.


Most of the time when I have to think about what to do with a 0 divisor, I come to the conclusion that

  if (foo == 0):
    return MAX_FLOAT
  else:
    return bar / foo
is more sound for the given algorithm than "return 0" (still crappy, but more sound).

Then use MAX_FLOAT as an error flag instead of 0, which could have been the result of a legal operation (with bar==0).


Wait, when creating a progress bar you divide in the other direction: (things already done) / (all things). Can you give a specific example where it causes problems? My experience of writing code like that is that there's a clearer way or I made a mistake somewhere.


Honest question. What is so annoying about that code? It's handling (in an application-specific manner) the special case of dividing by zero (which the `/` operator doesn't handle). I'm not sure of any other way to handle this.


Actually, bar/foo STILL has a gotcha which can throw an exception; many developers don't think of this.

Just try INT_MIN/-1; it can blow most "foolproof" divide logic away.


> But in this case, they are hurting their users far more than they are helping.

How could you possibly make an assertion like this without knowing why they made said choice[1]? Per the article, they didn't do it for fun, or to avoid exceptions (as you point out, more like defer exceptions). FTA:

> As I understand it, it’s because Pony forces you to handle all partial functions. Defining 1/0 is a “lesser evil” consequence of that.

Do you know enough about Pony's language design and decision process to support your claim that this decision (and all its second-order consequences) is hurting users far more than helping them?

[1] On the off chance that you are familiar with the tradeoffs Pony made and do know what you're talking about: care to clarify?


Hi I'm on the Pony core team and spent some time explaining the reasoning to Hillel. He hasn't written any Pony but I did walk him through how we ended up where we currently are.

I will be writing up a post about why we made this decision.


The real issue is that Pony doesn't have the concept of an unrecoverable error. All exceptions in Pony are checked, but sometimes if an error happens there's nothing your program can do and it should just crash.

Edit: Although Pony claims that programs should never crash, the language designers clearly understand the pragmatism of an unrecoverable error because it happens on OOM and stack overflow


It happens in scientific computing with zero-bounded signals fairly regularly. Sometimes values hover close enough to zero that the computation results in a/0. It's not desired, but the only alternative is a guard that invariably adds a performance penalty. There are other ways around it, such as normalization, but they're not trivial depending on the problem and can actually add other sources of bugs/errors. I understand the author's intent even though I disagree with the result.


1/a when a is close to zero is very large. 1/0 == 0 violates the limit and is just bad math.


Totally agree. In a mathematical proof you wouldn't ignore the edge case just because you think it might work the same way it has for some of your other proofs; it would invalidate everything. And in sequential processing, you could really mess up some of the data: one iteration with a 1/0 assumption completely changes every iteration after it. Bad news.


What the world really needs: "1/0 = 0" in production build, "1/0 throws error" in development build...


That makes your bugs invisible in production. You could be failing every single transaction in the real world, and never know if your tests don't tickle the unhappy case.


Fairly straightforward in some languages (Ruby for instance).


The author covers this at the end:

[1] The author has a dislike of the notion that 1/0 = 0 in a programming language. The author identifies this as a personal preference; I'm sure that preference is based on falsifiable notions, but the author doesn't elaborate. The author _DOES_ elaborate their motivation for writing the post: the post debunks the notion that 1/0 = 0 is 'bad math'.

[2] Pony chose 1/0 = 0 because the language is set up to force the coder to make all functions complete; if you go with the strict definition of division, which is an operation that has no definition when the second operand is 0, it's not complete anymore. I agree with your sentiment; this feels like the wrong answer, but it's definitely not as simple as just saying: why don't the Pony folks redefine things such that dividing by 0 raises some sort of error condition?


Does Pony have some analogous dodge for logarithms and square roots of reals, or for the gamma function?


Yep. Having had a couple divide-by-zero errors in a complicated simulation, I'm immensely grateful that it fails hard.


> If you have a program that's performing divide-by-zeroes, that's almost surely something you did not intend for

It's such a common case to consider that I don't think that's compelling. Knowing whether a division by zero throws an exception or yields zero is similarly part of what one considers.

> they are hurting their users far more than they are helping.

There's no evidence of this.


Is this different than integer division or integer over/underflow? I mean, besides that the integer behavior is status quo and 1/0 is not. Genuine question.


Integer over/underflow is arguably less arbitrary. Sometimes Z mod 2^k is really where you want to be working, and the rest of the time at least it's the fastest answer. Whether that's enough of a justification to be a difference? shrug


OK but that doesn't have a lot to do with what TFA is actually talking about.


You're absolutely right. Not only that, but it's impossible to distinguish a bad result like 1/0 vs a good result like 0/1.

Even Javascript is better with "NaN"


To clarify, in Javascript 1/0 is Infinity, not NaN.


You are correct. But it really should be NaN, since 1/ε is positive infinity, whereas 1/-ε is negative infinity.

Oh well :)


JS just assumes the limit direction for you, so 1/0 is Infinity, but -1/0 is -Infinity. 0/0 at least is correctly NaN. (Edit: And I just verified against my memory, Matlab (or at least Octave) does the same thing. While Matlab might get characterized as being for the ivory tower, at least it's had a long history of being used for practical math applications within the tower. Edit2: And anyway this is the defined behavior for IEEE floats. Men of industry use industrial standards. :))


Moreover, 1/-0 is -Infinity. IEEE float has two zeros, positive and negative.


Nope, I'd disagree. Zero isn't an approximation of some epsilon, it's really just zero. It makes sense for the output sign to match the input sign.


What is the sign of 0?

It's 0.


Parent was talking about the sign of the numerator. But... zero in IEEE 754 representation has a meaningful sign bit. 1/-0 = -inf.


Or as IEEE 754 sees it (you can try this in your JavaScript REPL).

  >> 1 / 0
  Infinity
  >> 1 / -0
  -Infinity


ε is nearly 0, and -ε is nearly -0.

So 1/0 is infinity, and 1/-0 is -infinity.


It's common for 1/0 = 0 to be exactly what you want. Array average, for example. That said, raising an exception would be the only consistent behavior, if a language supports that.


It is not valid to extrapolate an average of 0 from no data at all.

For example, if we examine a sample of zero elephants, then we end up estimating the average elephantine mass as being zero.

This proves to be wildly off as soon as we upgrade our statistical wherewithal to work with a sample size of one.

A center of mass is a kind of average. If we have an empty object made of no particles of matter at all, can we arbitrarily pin its center of mass to the (0, 0, 0) origin of the coordinate system we are using?


This is probably beside the point, but shouldn't the center-of-mass of a massless object be everywhere simultaneously? With no reason to prefer any location, over any other?


The center of mass of a massless object makes about as much sense as "everywhere" being the location of anything.


And in other cases you really do want the infinities. For example fast ray-AABB tests like https://tavianator.com/fast-branchless-raybounding-box-inter...


Common compared to what? Taking an average of an empty array is nonsensical.


I disagree. It’s similar to computing NaN-mean or NaN-sum for an array of all NaN values (which returns 0).

Let’s say some program calculates the mean of an array and then adds that mean to some accumulator.

For purposes of updating the accumulator, the mean of an empty array is perfectly well-defined: it should add nothing to the accumulator (add 0).

This might be a major operating requirement for the mean function, such that guarding or pattern-matching on an empty array and handling a failure is far worse than having a sensible 0-length convention.

Consider the difference between the “head” function of some List type, which has to either raise an exception or wrap the return value in some Maybe structure, because it’s literally not definable, vs the “length” function which has an obvious natural definition for empty arrays that is often highly preferred to some design where length(empty_list) throws an exception and everyone has to handle it in little bits of custom code to specify 0.
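In Python terms, the contrast looks something like this (a sketch; the names mirror the comment):

    from typing import Optional, Sequence, TypeVar

    T = TypeVar("T")

    def head(xs: Sequence[T]) -> Optional[T]:
        # not definable for []: absence must be signalled somehow
        return xs[0] if xs else None

    def length(xs: Sequence[T]) -> int:
        # total: the empty case has one natural, uncontroversial answer
        return len(xs)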

To me this topic is all about usability and not about some parochial claim that some operation is nonsensical.


> It’s similar to computing NaN-mean or NaN-sum for an array of all NaN values (which returns 0).

Why would you want to do this rather than validating understanding of the data before computing on this?

> For purposes of updating the accumulator, the mean of an empty array is perfectly well-defined: it should add nothing to the accumulator (add 0).

That's not a mean, though, that's a quirk of how you decide to (incorrectly) compute a mean. What's the point of such a program? It should refuse to compute when it's given no values--that's clearly a category error.

Otherwise, you're not using a mean, you're using a mean-or-0-when-lacking-a-mean. Might as well not call it a mean at all so other people can read your code.

Yes, this is pedantic, but this kind of subtle changing meaning of terms is exactly what leads to bugs. Name your functions accurately.


From your response I can tell you don’t do much numerical linear algebra work.

Consider needing to vectorize a large column-wise mean calculation across columns of a large data matrix (where NaN values are sparse but appreciable).

The NaNs might be perfectly reasonable, expected pieces of data, but you still want to understand the distribution of the non-NaN data, and adding extra work to filter it out first might be hugely costly, or even actually wrong depending on what other operations the NaN data is planned to be passed to and how those operations natively handle NaN.

And simple columnar summary stats are just the tip of the iceberg. It gets much more complicated.

By no means is the solution of “diagnose why there are NaNs ahead of time and preprocess them accordingly” even remotely realistic in most use cases. This is why libraries like pandas, numpy and scipy for instance provide specific nan-ignoring functions or function parameters.


That's all fine and dandy, but conceptually it's wrong to say the average of an empty array is 0, and it can and will lead to wrong results in a variety of cases. I'm sure you can think of a lot of these cases yourself. I think in the history of computer science we programmers have found that there are a lot of convenience shortcuts that make sense in many cases but bite our asses in others. Implicit is fast and fun, but it's nice to have your seatbelt on when the car crashes. Going back to the average case: if you want an average function that returns 0 on empty arrays, fine. But that's not the average function, and you shouldn't call it that. Names matter; you should call it averageOrZero or something like that.


Why is it conceptually wrong to say the average of an empty array is zero? My undergrad degree is in pure math and my grad degree is in mathematical statistics and I’ve never heard an idea like saying the mean of an empty array is zero is “conceptually wrong.”

You bring up the history of CS, but even there you have debates about what convention to use for defining 1/0 for function totality and theorem provers.

There’s no aspect of pure math derivation of number systems on up through vector spaces that definitively makes a zero mean for an empty array ill-defined. Whatever choice you make, positive infinity, undefined, 0, or any finite values, etc., any such choice is purely down to convention that depends precisely on your use case.


> Why is it conceptually wrong to say the average of an empty array is zero?

It’s not conceptually wrong, it just means the “mean” you’re referring to calculates a different value than the “mean” we’re taught in school. So, underlying assumptions about the differences in “mean” should be communicated where it’s used.


Sure, I agree they should be communicated. Like, in the docs for “standard” mean functions, and not pushed into “specialized” mean functions, since needing this particular convention is not remotely special, and is rudimentary and expected in 99% of linear algebra and data analytics work, which are the largest drivers of these types of statistical functions.


> It’s common for 1/0 = 0 to be exactly what you want. Array average for example.

Wouldn’t that be a (potentially very different) case of 0/0? If there are 0 elements, you wouldn’t have a nonzero numerator, right?


The mean of an empty array isn't zero though.


Hi,

I'm on the Pony core team. I will be writing in more detail about this decision. A few short notes until then:

1) No one on the team has ever been happy with ending up here. Understanding why the decision was made involves understanding how partial functions (ones that can produce errors, like division by zero) are handled in Pony, and the interesting ergonomic issues that can result; that is a large part of what my post will be about.

2) this applies only to integer division wherein division by zero is undefined. 1.0/0.0 uses the available Infinity value.

3) Partial function math is coming to Pony, which will treat 1/0 as partial (i.e. error producing), as well as handle integer overflow and underflow for the operations `+`, `-`, and `*`.

4) It's very straightforward, even without those operations, to define your own division that will be partial (i.e. return an error) for division by zero.

Edit to include a link to the comment below, because it contains a good bit of the decisions that have to be made when dealing with integer math:

https://news.ycombinator.com/item?id=17736637

Which is part of what I will touch on in my post about the decision process that Pony went through to reach the state we are currently at (which, I want to highlight, is one we intend to improve; see #3 above).


> 2) this applies only to integer division wherein division by zero is undefined. 1.0/0.0 uses the available Infinity value.

Oh jeeze... to me that almost nullifies all of the points in the article. It does an alright job explaining how 1/0 = 0 is at least consistent with other notions in the language, but to hear that the same logic can’t be applied to floats is just... well, objectively it’s a mess.


Hillel is talking about math. Unfortunately computers don't really do "math". For example, math doesn't have overflow and underflow to deal with.

The floating point standard says that division by 0.0 should be Infinity and provides a value for it. For integer math, all possible bit patterns are already used for numbers, so division by zero is undefined behavior.

And from there, every language is potentially going to have a mess of inconsistency to deal with.

You can make 1/0 = Infinity so long as you box all your integers. That is, not using the machine type, but a type that is a wrapper that can hold either the machine type or Infinity. And every user takes a large performance hit even if they will never divide by zero.

For some languages boxing integers is not a bad thing because they already do it. Why do they do it? Usually because of another not-math problem: overflow and underflow.

What is 0 - 1? Well, it's -1, right? Except of course if you have an unsigned integer, then it's the max value of the integer. And what if you add 1 to the max value that the computer can represent? You wrap around. Some languages make all integers protect you from this, taking the performance hit of boxing to be able to represent numbers more like we expect them to work. In return, you cannot do integer math as fast as you otherwise could.
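A quick illustration of both behaviors, using numpy's fixed-width integers next to Python's boxed ones (a sketch):

    import numpy as np

    with np.errstate(over="ignore"):
        print(np.uint8(0) - np.uint8(1))  # 255: unsigned wraparound
        print(np.int8(127) + np.int8(1))  # -128: signed wraparound
    print(0 - 1)  # -1: Python's boxed ints just grow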

Computers and numbers... it's an inconsistent mess, and for any language a series of tradeoffs between performance and various possible errors.

I'm going to talk about this and more in my post.


Integers are perfectly fine in math. But they are not a field. Integers modulo something are also well understood in math - but again not a field.


Integers modulo any prime are a field.


Powers of 2 are not a prime though, unless the power is 1 (i.e. we are dealing with single-bit integers).


Underflow and overflow and 2's complement are all just modular arithmetic, which is definitely math.


Computers is the math you get, not the math you want.


Here's another good one for you...

What does your favorite programming language return for 3/2 as compared to 3.0/2.0? Many will return different values.

EDIT:

for clarity of my point: integer math on machine types is pretty surprising compared to what we would expect division to do in "math" in general.


`3/2 = 1` is a consistent answer as long as the programmer understands modulo. The way you're phrasing it makes it sound like there are languages out there that say "`3/2 = 2` because 1.5 rounds up to 2".

just so long as you don't expect a consistent answer between python2.7 and 3.x... >.>
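Concretely, at the respective REPLs:

  >>> 3 / 2   # Python 2.7: truncating integer division
  1
  >>> 3 / 2   # Python 3.x: true division (3 // 2 gives the old behavior)
  1.5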


The phrasing is relatively clear to me as saying that some languages say 3/2 = 1.5 (e.g. JavaScript) and some languages say 3/2 = 1 (e.g. C).


Did you mean 2/3?


No. But most programming languages I know of return different results for 2/3 vs 2.0/3.0.


In the mathematical sense, a Field is a Commutative Ring with the added constraint that the multiplicative operation has inverses for every non-zero element.

Floating point doesn't even form a Ring, because addition is not associative[0]. Coq, for instance, defines these operations for QArith, its rational numbers. The representation is a pair (Z, positive), where Z is an integer and positive is a positive whole number (i.e., 1, 2, ...). QArith does form a field in the usual sense.

Integers are not a Field (unless you pick Z/p for some prime p), and machine words, signed or unsigned are even less of a mathematical object with the usual operations. So you can do almost anything you want to them from a programming perspective, including 1/0 = 0.

As Hillel writes, what to do will make sense in some settings, but not in others. You may want MAX_INT in some situations, and 0 in others. And this requires a check, partiality or not. Rejection of the case x/0 has more to do with the notion that it is usually a corner case where you want the programmer to think about what the result should be.

[0] The reason is that any addition can incur information loss, and the further away from 0 you are, the more information is lost. Thus, the order in which you add values matters.
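A quick way to see this at any REPL with IEEE 754 doubles (Python here):

  >>> (0.1 + 0.2) + 0.3
  0.6000000000000001
  >>> 0.1 + (0.2 + 0.3)
  0.6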


As pointed out by others, floats work differently from ints in programming languages. Division on ints is usually defined such that if x = d * y + m, then x / y = d and x % y = m. You can make this work by choosing the right m for every d; when y is not 0, there are a few conventions you can choose from to pick a sensible d and m. When y = 0, m = x and d is completely free, so there really isn't a right answer to fall out of how division works.
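In Python, for example, the invariant reads like this for nonzero y (using the floor-division convention):

  >>> x, y = 7, 2
  >>> d, m = divmod(x, y)
  >>> (d, m)
  (3, 1)
  >>> d * y + m == x
  True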

Division with floats doesn't usually have a definition like that, so you don't end up in a similar situation. Plus, floating point math can have denominators get really close to 0, with the result of the division tending toward the infinity value. Ints have no values arbitrarily close to 0, and no infinity to represent what that division would tend toward.


As an aside, I have thought recently that integers should also have NaN value, just like floating point values have. It's useful in statistics to differentiate between not having a value and value of 0.

And I was thinking that perhaps 0x80000000 (in two's complement arithmetic) would be a good value for an integer NaN. This value is already a fixed point with respect to negation, so why not make it into an exception in its own right?
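The fixed-point property is easy to check; a sketch in Python, where `s32` is a hypothetical helper that reinterprets a value as a signed 32-bit two's complement integer:

  def s32(n):
      # interpret the low 32 bits of n as a signed two's complement value
      n %= 2**32
      return n - 2**32 if n >= 2**31 else n

  print(s32(0x80000000))    # -2147483648
  print(s32(-0x80000000))   # -2147483648: negation maps it to itself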

But it would mean rethinking all the hardware, which I don't think is an option.

In any case, I think setting 1/0 to 0 is a bad idea, but I could imagine setting x/0 to 0x80000000 (interpreted as NaN) making more sense.


I like the idea of having integer NaN but I think unsigned numbers are problematic as there isn't any convenient way to encode it.


>2) this applies only to integer division wherein division by zero is undefined. 1.0/0.0 uses the available Infinity value.

Ah, that seems rather reasonable. For whatever reason, the OP had me thinking that Pony was replacing the ordinary semantics of IEEE 754 floating point division (probably because he talked about fields and multiplicative inverses).

Having 1/0 = 0 seems like a very reasonable choice, especially if you have 1%0 = 1 as well. Of course, if anyone thinks they will actually encounter division by 0 in their code, and needs to treat it as an error, they can always test for it.


Having 1/0 = 0 and 1.0f / 0.0f = +inf makes sense in two applications I can think of. The first would be currency systems in which prices down to the pre-determined fractional component are represented as unsigned integers. In calculating margin requirements division by zero equating to zero would be the correct answer. And two, for double precision continuous variables, any application involving 3D geometrical transforms. An aspect ratio approaching zero in one axis, would approach positive infinity in the other.

So you may be saving some developers time in operator overloading, etc. More important than the actual binary decision itself is having well-documented behavior. Have not tried Pony yet, but will take a look. Good discussion, keep it up!


We are over in #ponylang on freenode if you ever want to stop by and chat.


This should be the top comment. INTEGERS are not a field. But doesn't that kind of make the 0 choice worse? CPUs generally trap on integer divide by 0, so I don't really see the low-level optimization. Does it enable some higher-level optimization? Why not x/0 = x?


What about I64.min_value() / (-1)? Why does that give 0? If you're doing wrap-around arithmetic it should give I64.min_value().

https://tio.run/##K8jPq/z/PzG5JL9IwTcxM49LAQjyUssVkotSE0tSNV...
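For reference, here is what wrap-around would give, sketched in Python (`s64` is a hypothetical helper that reinterprets a value as a signed 64-bit two's complement integer):

  def s64(n):
      # interpret the low 64 bits of n as a signed two's complement value
      n %= 2**64
      return n - 2**64 if n >= 2**63 else n

  I64_MIN = -2**63
  print(s64(I64_MIN // -1))  # -9223372036854775808, i.e. I64_MIN again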


That might be an LLVM bug.

`I64.max_value() + 1` gives the expected result.

I'm going to have to look into that.

Thanks.


Apologies, I realise my comment was a bit snarky. Pony looks like a really neat language: I'll be interested to see what you do with it.

Unfortunately LLVM considers division overflow undefined: http://llvm.org/docs/LangRef.html#id109

If it's any use, you can see what Julia ended up doing here: https://github.com/JuliaLang/julia/issues/8188


I didn't take it as snarky. I was genuinely surprised. Thank you. If you intended it to be snarky, I didn't notice.

And thank you for the additional info, I've updated the issue I opened accordingly:

https://github.com/ponylang/ponyc/issues/2858


I'd like to jump in here and point out that Pony's ergonomics are unlike those of any other language; it is significantly different from anything else out there.

If 1/0 = 0 sounds unusable to you, Pony's garbage collection scheme (which is brilliant, by the way) is going to make you think the language is insane.


1/0 is not infinity either...


If we assume that we got to 0 because of a rounding error we do know that it should be some positive integer. I think that 1/0 = infinity is a reasonable substitute for an actual value. 1/0 = 0 seems absurd. 1/(a number approaching 0) produces ever increasing integers.

I don't know, does it even matter? What happens when you try to divide a physical object into 0 parts? It doesn't create infinite pieces. It doesn't make the object disappear. Seems like nothing happened. Maybe when we divide by 0 in a program it should halt and catch fire?

I hate logical arguments, I get sucked in and start arguing all the positions.


>If we assume that we got to 0 because of a rounding error we do know that it should be some positive integer.

That's assuming we had rounding error from a positive number, not a negative number. Of course, if you are concerned about rounding, you shouldn't be using integer arithmetic.

>1/(a number approaching 0) produces ever increasing integers.

No. It produces a bunch of 0s, except when the denominator is 1 (where it produces 1) or the denominator is -1 (where it produces -1). Unless you're using a language like Python, in which case it produces -1 for all negative denominators.
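Checked at a Python 3 REPL, where integer division floors toward negative infinity:

  >>> [1 // d for d in (5, 2, 1, -1, -2, -5)]
  [0, 0, 1, -1, -1, -1]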


Actually, if your embedded system will stop keeping a patient alive when it crashes, and the zero value doesn't cause any harm, then you don't crash.

There are lots of situations where it makes sense to keep computing even if you suspect the input data is wrong. It's not really good practice to always be crashing. You need to decide how to handle the bad input on a case-by-case basis.


I think "crash" in these discussions generally refers to "abort the current action and return to a safe state".


If you divide something 12 feet long into 3 parts, you don't get 4 parts, you get 3 parts that are each 4 feet long. So, if dividing a number by 0 yields infinity, you would expect that dividing something physical into 0 parts would yield 0 parts, each of infinite size, whatever that means.

Sorry to be pedantic.


It's fine to be pedantic in math. You can't divide something 0 times; there's no solution for that. You can divide something an infinite number of times, but infinity is not 0. This might help explain things:

https://www.math.utah.edu/~pa/math/0by0.html


but with floating point units, 1.0/0.0 is infinity.


That's not correct mathematically though. You can't divide something 0 times and get any number. Adding zero + zero + zero infinite times does not equal 1.


> That’s not correct mathematically though.

Floating point infinity is not equal to math infinity. Floating point infinity means there was overflow of the representation we are using, not that the number is larger than any known number.

> Adding zero + zero + zero infinite times does not equal 1.

Sometimes it does. That’s what an integral is.


Nope. An integral is the sum of infinitesimal values, not zeroes. If you dig into the definition of the Riemann integral, you'll see that there are no zeroes in the sum at any step. Except for zeroes of the function being integrated, of course.

For example, the Riemann integral is just the limit of sums built on tagged partitions of a closed interval, as the largest sub-interval approaches zero. But no single sub-interval is equal to zero; their lengths just approach zero while remaining strictly positive.

Calculus does not divide by zero; calculus explores what happens when we move as close to zero as we can, and even closer.


"That's not correct mathematically though."

Yes, certainly it is.

"You can't divide something 0 times and get any number. Adding zero + zero + zero infinite times does not equal 1."

The discussion is about mathematics, not elementary school arithmetic.


Feel free to give me a proof that adds zeros and gets a number greater than 0.


It's plenty correct mathematically; the linked article even explains this.

Generally, your viewpoint comes from an incorrect understanding of mathematics, so let's explore what math actually is:

When we say "10 / 2 = 5", the result "5" doesn't come out of nowhere. It comes out of the definition of division.

There are several ways you could define division. You could interpret "10 / 2 = 5" as meaning any of:

• "If you had a 10-inch stick and you divided it into 2 equal parts, each part would be 5 inches."

• "If you had the equation 2 * X = 10, X = 5 would be a solution to it."

• "If you had 10 Pa of pressure in an enclosed box, and you increased the volume of the box to 2 times its previous volume, its new pressure would be 5 Pa."

• "A wave of wavelength 2cm would have a frequency of 5 per 10cm."

These definitions all lead to the same result – in other words, they're equivalent. When this happens, mathematicians usually don't particularly care which one is the "real" definition.

This applies to many different mathematical axioms. For instance compare the definition of the Parallel Postulate on Wikipedia:

"If a line segment intersects two lines, the two lines will intersect on the side that their angles with the line segment sum up to less than 180°"

https://en.wikipedia.org/wiki/Parallel_postulate

with the definition on Wolfram MathWorld:

"If you have a straight line and a point not on it, there exists only one straight line which passes through that point and never intersects the first line."

http://mathworld.wolfram.com/ParallelPostulate.html

These are different definitions. But since they lead to the same result, mathematicians don't really care to argue over which is the "real" definition.

Some definitions only apply over a subset of numbers. For instance, originally, "5 - 10" was undefined. Then negative numbers were invented, to expand the definition. The same thing happened with square roots and complex numbers.

That's the same thing with division. 1 / 0 was originally undefined. But you can come up with new number systems that define it, while preserving all the other properties that make division what it is. A common definition is 1 / 0 = Infinity, which is done in the Riemann sphere number system:

https://en.wikipedia.org/wiki/Riemann_sphere

There's nothing mathematically incorrect about doing any of this.


thanks for such a genuine response


Some people say "oh, that's easy, 1/0 is +Infinity". So the real fun is at 0/0.

The limit of x/y as x and y go to zero depends on which path across the xy plane you take towards the singularity. Along one approach, the limit is 0, along another approach the limit diverges to infinity, along yet another the limit is 17.

I'm not kidding! Go to https://www.geogebra.org/3d and enter "x/y" and spin the graph around. The "value" at x=0 y=0 is the entirety of the z-axis.

For another perspective, try http://www.wolframalpha.com/input/?i=z%3Dx%2Fy and turn on contour mode for the 3D plot. Notice how the contour lines radiate out from the z-axis; each of those is an "approach" to the singularity at a different z-value, and taking the limit along each line leads to a different value.


I always thought division by zero was, at best, + and - infinity, depending on the path, which is why we leave it undefined.

How would a path lead to 17?


  x=170, y=10
  x=17, y=1
  x=1.7, y=0.1
  x=0.17, y=0.01
The answer keeps being 17, even as x and y both get vanishingly close to zero.

I encourage you to play around with a 3D graphing calculator and see all the different paths you can take along that surface to reach the singularity. They all "reach" it at different heights.


you are explaining why 0 / 0 can approach 17; this is much less controversial, and 0/0 is often considered the bottom element...


17 is a bottom element?


That's not what he said.


In calculus you're supposed to learn that y/0 with y != 0 is "undefined" (±Inf with limit expressions), while 0/0 is "indeterminate" (can construct any value using a limit expression that contains x/x with x -> 0).


Take the limit as x goes to 0 of (17*x)/(x). Both numerator and denominator go to 0, so this is a representation of 0/0. The limit is then equal to 17.
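If you want a machine to check it, SymPy agrees (a quick sketch):

  >>> from sympy import symbols, limit
  >>> x = symbols('x')
  >>> limit(17*x / x, x, 0)
  17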


This just means that 0/0 = 1. It doesn’t mean that any number / 0 tends to 17.


But think about what a limit is. Think of X and Y as their own functions, so something more like X(g) / Y(g). As g increases, X and Y can take any path across the plane that they want; we get to arbitrarily choose what X and Y are. All we are doing is describing how our limit behaves as we slide along the scale towards infinity. Just because g went from 10 to 11 doesn't mean that X and Y behaved the same way.

We can arbitrarily choose X and Y such that the limit as g approaches infinity is 0 for both, even though the ways X and Y change are different. Like the example above: if we choose X and Y such that X(g)/Y(g) is 17 for all g, but X and Y both approach 0 as g approaches infinity, then as g approaches infinity X and Y go to 0, but X / Y (0 / 0) goes to 17. It's not a true 0/0, because the functions we chose got to 0 at different rates.


Right, it just means the limit of that path tends to 17. It's not a well-defined limit.

In a function of one-variable, there's a distinction between one-sided and two-sided limits. I don't know what the terminology would be for multivariable functions, but this is closer in nature to a one-sided limit.


  170 / 10
  17 / 1
  1.7 / 0.1
  0.17 / 0.01
  0.017 / 0.001
  ...
  0 / 0


Take the path (x,y)=(17a,a) as a goes to zero one dimensionally.


I like your point here. I think 1/0 = Infinity+ is a satisfying expression. It's a clear concept which can be visualized in a simple graph.

I think 0/0 is a different concept than 1/0. It's a different expression. 0/0 doesn't explicitly express a particular "path" on the graph.

We can call it (0/0) different names if we want. They can say 0/0 = "undefined". I am currently satisfied with 0/0 simply equalling 0/0, or simply "undefined".

If we all agree it is "undefined" we ironically defined it.


>I like your point here. I think 1/0 = Infinity+ is a satisfying expression. It's a clear concept which can be visualized in a simple graph.

Except it isn't. It still depends on which side you take the path from - from the positive denominator or the negative denominator side. (This is why IEEE 754 has a single signless infinity).


> IEEE 754 has a single signless infinity

No it doesn't. I don't know where you got that idea, but it can't be from ever writing any floating point code, or looking at the IEEE-754 standard, or the floating point format. Please see https://www.h-schmidt.net/FloatConverter/IEEE754.html and stop spreading radically wrong misinformation.

> It still depends on which side you take the path from - from the positive denominator or the negative denominator side.

In IEEE-754, 1/0 == Infinity and 1/-0 == -Infinity.
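Python's own `/` raises an exception on float division by zero, but the IEEE-754 behaviour is visible through NumPy, for example:

  import numpy as np

  with np.errstate(divide='ignore'):
      print(np.float64(1.0) / np.float64(0.0))   # inf
      print(np.float64(1.0) / np.float64(-0.0))  # -inf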


Don't know why you're getting downvoted, but thanks for the clarification. Looks like I misremembered. Apparently it also has +0 and -0...


Oops, I forgot the negative side. Thanks.


> I think 1/0 = Infinity+ is a satisfying expression. It's a clear concept which can be visualized in a simple graph.

Not for integers.


Since the "1/0 = 0" is only defined for integer division then I'd argue that the chosen constant should be MAX_INT. (or MIN_INT in case of signe values and negative numerator)


That's a wonderful graphing website from the look of it!


Limits aren't the only thing broken by division by zero, though. For integers it's mainly algebraic stuff.


I would argue 1/0 is infinity. Not with mathematics, just logically. By using the wording "how many times does 0 go into 1?"

You bring up 0/0. But 0/<anything> is already defined, it's 0. So 0/infinity = 0

Programmatically, I think I've always wanted X/0 to be 0. For example: progress bars, currency, or damage in a video game. It wouldn't be very helpful to have infinity be an answer there.


Your argument is flawed because your zero is positively biased.

Why infinity and not negative infinity?


I encourage you to look at the graphs. Yes, the value along the y-axis is 0, and the value along the x-axis approaches positive and negative infinity. It's path-dependent. Take a look.


In float, yes. In int there's no Inf. There's only MAX_INT, or 0, or throwing an exception.

Normal langs throw. That's what the checked int div will do. But the unchecked int div has no return type that allows throwing an error. And Pony is strongly typed, unlike most other langs. That's why it can guarantee much more safety and performance than other languages.


> But is Pony doing something unsound? Absolutely not. It is totally fine to define 1/0 = 0. Nothing breaks

It actually does break something: the symmetry between division and multiplication, and the many pieces of code that assume that (x / y) * y equals x. Here is a naive and impractical example, but it is not impossible to find a real-world case where this simplified code manifests itself, accidentally or by design:

  function do_something_with_x(x, y) {
    let ratio = x / y;
    if (do_something_with_ratio(ratio)) {
      return ratio * y;  // assumes (x / y) * y recovers x
    }
    else return null;
  }
Again, this is a naive example but one that could manifest itself with a very imprecise result when y = 0


I think OP means that nothing breaks mathematically. It is not inconsistent and not false, so you can work with it. The only issue is to deal specially with the case of division by zero, which you have to do anyways. Code that assumes that (x/y) * y = x is wrong if you don't check for y = 0, independently of what you define x/0 to be.


You do realize that division is the inverse operation of multiplication, right? Like subtraction is the inverse of addition. By defining addition we define subtraction. By defining multiplication we define division. This is where the author fails. Division is multiplication by a fractional value. This is VERY important.

And just because it is mathematically a field does not mean it is particularly the right choice. A field is not strictly defined by addition and multiplication; it is defined by two operators where one forms an abelian group (addition in our case) and the other forms an abelian group over the non-identity elements of the first (e.g. multiplication is an abelian group over the non-zero elements). The complex numbers form a field, which is really how we do addition and multiplication in 2D space. In 3D space you can't form one, so you have a ring. Which is why quaternions are so important: they form a division algebra (everything a field requires except commutative multiplication).

But the author is wrong because they think division is a different operation than multiplication. It's just a shorthand.


In a field, division by zero is not the inverse operation of anything.


The usual definition of division is via the multiplicative inverse, yes. But that does not mean that, as an intellectual curiosity (which is how I take this post), you can't define "division" as having another value. Sometimes that's useful, sometimes it's not. Most of the time it is fun to see how things break and don't break. For example, the infamous 1+2+3+... = -1/12 sum[1]: for most purposes it's a divergent sum, but you can find ways to assign it a finite value and make that useful.

1: https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B...


The OP says

> The field definition does not include division, nor do our definitions of addition or multiplication.

This is clearly a misunderstanding: it comes from not realizing that the definition of division is a symbolic shorthand for inverse multiplication.

Sure, you're right that we can define it another way. But I don't know what we would call an object with three operators. (Someone more knowledgeable in field theory, please let me know; it's hard to Google.) In the normal field we work with (standard addition and multiplication) we only have two unique operators. Everything else is a shorthand. In fact, even multiplication is a shorthand.

>For example, the infamous 1+2+3... = -1/12

Ramanujan summation isn't really standard addition, though. It is a trick. Because if you just did the addition like normal you would never end up at -1/12; you end up at infinity (aleph 0, a countable infinity). But Ramanujan summation is still useful. It just isn't "breaking math" in the way I think you think it is.

But I encourage you to try to break math. It's a lot of fun. But there's a lot to learn and unfortunately it's confusing. But that's the challenge :)


The fact that it's a shorthand means that it's not in the definition, just a convention. In fact, nowhere in my studies I saw anyone using 'division' when working explicitly with fields in algebra, it's always multiplicative inverse.

By the way, you cannot do the addition like normal on that series, you don't end up at anything. You can say it diverges and its limit is +infinity (not aleph0, aleph0 is used for cardinality of sets so I think no one would use it as the result of a divergent series). When I said it 'broke math' what I meant was that, in the same way that OP did, it is a way to assign values to things that usually don't have one. I know it does not actually break math.


That's not necessarily the definition of division; there are many commonly used definitions that differ across contexts.


Bravo, that's a great explanation. I think you've done a better job here than my comments elsewhere in this thread. Nice tie in to dimensionality, the complexes and quaternions too.

There are legitimate reasons to advocate for defining division by 0 in the context of a programming language. But the attempt at mathematical rigor really distracts from those reasons. It feels like the author made a decision and reached for some odd mathematics that seems to support it but doesn't. Floating-point numbers don't even form a field. If we define division by 0 in a field, it stops being a field and becomes a ring or some sort of weird finite field with a single element...


> Division is multiplication of a fractional value

Thus, division by 0 is multiplying by (1/0). Does such a fraction exist? It can go along two potentially different paths depending on the limit we take.

Alternatively, there is information lost when you multiply something by 0: a x 0 = 0, b x 0 = 0. When you perform an inverse by dividing, will you get back a or b (or any number)? Thus, it is not invertible at least at 0.

Disclaimer: Not a mathematician


>Alternatively, there is information lost when you multiply something by 0: a x 0 = 0, b x 0 = 0. When you perform an inverse by dividing, will you get back a or b (or any number)? Thus, it is not invertible at least at 0.

>>A field is not strictly defined by addition and multiplication; it is defined by two operators where one forms an abelian group (addition in our case) and the other forms an abelian group over the non-identity elements of the first (e.g. multiplication is an abelian group over the non-zero elements).


It is inconsistent. If 1/0 = 0, then 1 = 0*0.


This should not be getting downvoted. The author of the article made an incorrect refutation of this point under the "Objections" heading. Division by 0 in fields is undefined precisely because it leads to contradictions like this. The full statement of a proof like the one above includes the temporary assumption that you have defined division by 0 such that it is no longer undefined.

You can't logically refute that proof by saying, "well no, you have yet to define division by 0 according to the field axioms, so you can't use that division as part of your proof." That's the point! The proof does not demonstrate that division by 0 results in 1, it demonstrates that you cannot define division by 0 while maintaining the algebraic structure of a field.

If the author wants to talk about defining division by 0 in wheels or something esoteric they're more than welcome to. But among fields, and among the real numbers, it's not possible. This whole exercise of trying to refute what has been commonly accepted for over a century is frankly ridiculous for trying to justify undefined behavior in a programming language.


But the point is you keep the structure of a field for everything that already has it, right? No structure is taken away. All statements valid on any subset of the reals are still valid without modification under this new definition. The algebraic structure is still a field obeying the same axioms. There are just some new valid statements too.


No that's not correct, and this is why I think the author's entire point is pretty inane. If you're going to start off your argument with the full formalism of field axioms and consequent theorems, you need to be prepared to split hairs about whether or not your definitions constitute a field.

Mathematics is thoroughly pedantic about definitions for a reason. If those formalisms don't matter because what you've done is "close enough", then skip the song and dance about field definitions and stop trying to use it to justify the behavior of an undefined operation in a programming language. Just say you're defining 1/0 to be equal to whatever you want because the world doesn't break down. It actually detracts from the author's point to so confidently (and incorrectly) refute something that is robustly proved in the first few weeks of an undergraduate analysis course. Why is this even in a blog post about a programming language?!

This is essentially the same point as the extended real (or complex) number systems. The sets of all real and complex numbers (respectively) form fields under the axioms of addition and multiplication. But you can define division by 0 and division by infinity in a way that works with familiar arithmetic (I explained how to do this in another comment barely two weeks ago [1]). But the key point here is that in doing this you sacrifice the uniqueness of real numbers.

The author tries to refute this observation by claiming the proof uses an undefined division operation, but that's a red herring. The real assertion is that you cannot define division as the inverse operation of multiplication in a way that includes division by the additive identity (i.e. 0, in the real field) unless you are willing to state that every number is equal to every other number in the entire field. And you can do that, but it trivially follows that you no longer have a field with any nonzero element.

So really the actual proof is that division by 0 is undefined for any field with at least one nonzero element.

__________________________________

1. https://news.ycombinator.com/item?id=17599087#17601806


Is there a proof or something elsewhere you can link to? To be honest I can't really tell the point you're trying to make.


I can give you a simple proof by contradiction.

1. Let F be a field containing an element x =/= 0.

2. Suppose we have defined division by zero in F such that, for all x in F, there exists an element y = x/0 (i.e. F adheres to the field axiom of multiplicative closure). Note that at this point it does not matter how we have defined division by 0, we will just generously continue and assume you've done it in a way that maintains the other field axioms.

3. Since y = x/0, it follows that the product of y and 0 is equal to x, because division is the inverse of multiplication. By the field axioms, division does not exist if there is no multiplicative inverse with which to multiply.

4. But by the field axioms this implies that x = 0, which contradicts our initial assumption. Likewise, since we can repeat this procedure with any element x in F, this demonstrates that there exists no nonzero element x in F, and in fact F = {0}.

The failure in the article's refutation is that this proof is designed to permit you to assume you have suitably defined division by zero, then proceed to demonstrate without any loss of generality that you could not possibly have unless 1) F is not a field, or 2) F contains only 0. The fundamental algebraic property you sacrifice by defining division by zero is uniqueness, and uniqueness is a hard requirement in fields with nonzero elements.


There is a mistake in your step 3, as it relies on an informal and imprecise "division is the inverse of multiplication". If you were to write that formally, you'd get `∀ x ≠ 0 . x(1/x) = 1`. This holds unchanged even if you define division by zero.

Even if you could come up with another formalization that does cause a problem, e.g. `∀ x ∈ dom(1/t) . x(1/x) = 1` (and I would say that this is the only formalization that causes an issue, and it requires the use of a language with a dom operator, something that is absolutely not required for theories of fields), it won't matter because the question is not whether one could come up with a formalization that leads to contradiction, but whether there are reasonable formalizations of fields where this does not happen, and there are (in fact, most of them satisfy this, as they do not rely on a dom operator).

In addition, it is not true that "by the field axioms, division does not exist if there is no multiplicative inverse with which to multiply." It's just that the field axioms do not define what the meaning of division is in that case. Defining it, however, does not lead to contradiction with the axioms, at least not a contradiction you've pointed out. In fact, most common languages of mathematics cannot even explicitly express the statement "x is not in the domain of f." All they can do is define f(x) for values of x in the domain, and not define f(x) for values outside it. The "exist" in your statement does not refer to ordinary mathematical existence (usually formally expressed with the existential quantifier) but to an informal notion of definedness (discussed by Feferman) that has no formal counterpart in most formal systems, because it is very rarely needed; it is certainly not needed to state the field axioms.


This is nonsense. If F is a field with elements x, y, then the quotient x/y is equivalent to the product x(1/y). If 0 has no multiplicative inverse, there is no division by 0. The two concepts are one and the same, just as is the case for subtraction and additive inverses. The problem is not that field theory doesn't tell you "what happens" when you divide by 0 - you haven't defined that, there is no what happens because nothing happens at all.

You can't engage with the problem because it only exists as a syntactical annoyance. You seem to acknowledge this, but then continue to argue when I explicitly tell you I am in agreement on that point. Then you proceed to argue the theoretical basis all over again.

I'm not going to continue arguing this with you. You're presently the only one in this thread who isn't following and I've tried to direct you to other resources. You've alternated between saying those proofs are either incorrect outright or not applicable because they don't have relevance for programming. If you actually believe division by 0 is possible in fields you have an immediately publishable math paper waiting for you to submit it.

Otherwise we're just talking past each other because my whole point here has been that the author's discussion of fields is irrelevant for programming language theory in the first place.


> If 0 has no multiplicative inverse, there is no division by 0. The two concepts are one and the same.

But formalizing the statement "there is no division" poses challenges. If you agree that a formalization of fields that is acceptable to you exists, please write a formal axiom/theorem in that theory which would break if we were to extend the definition of division.

> just as is the case for subtraction and additive inverses.

No, because subtraction is not partial, and therefore poses no challenge for formalization.

> there is no what happens because nothing happens at all.

This is fine when working informally, but doesn't work for formalizations, and formalizations of fields do exist.

> You can't engage with the problem because it only exists as a syntactical annoyance.

But that is the heart of the problem. Either you say one cannot formalize fields at all, or you must admit that some acceptable formalizations do not break. No one is claiming that one should be able to divide by zero in informal mathematics.

> I'm not going to continue arguing this with you.

That's fine, but settling this argument is very easy: write a theorem of fields in a formal language that would break if we extend division, one that cannot be equivalently written in an acceptable formalization in a form that does not break. This can literally be done in one line. If you cannot write such a theorem, then there really is no point arguing, because you cannot provide a counter-argument. Repeating the same informal theorems over and over is indeed pointless.

> You've alternated between saying those proofs are either incorrect outright or not applicable because they don't have relevance for programming.

I've not alternated on anything. Your proofs are incorrect in formal systems that you yourself would find acceptable.

> and I've tried to direct you to other resources

Resources about informal mathematics, on which everyone agrees. That the informal concept of division cannot be extended to 0 is obvious. The only question is whether a formal definition of division would necessarily break formal field theorems if extended to division by zero. You seem to claim that's the case, yet have not stated a single formal theorem of the kind.

> If you actually believe division by 0 is possible in fields you have an immediately publishable math paper waiting for you to submit it.

I would if only that result (doesn't merit a paper) had not already been published by at least Paulson and Avigad, two of the world's best known logicians. The formal field theory in the Lean proof assistant explicitly allows division by zero. That division coincides with the informal division everywhere but at zero, and the extension introduces no inconsistencies to the theory.
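For example, in Lean 4, natural-number division is total, with n / 0 defined to be 0, so the following one-liner checks (assuming only core Lean, no mathlib):

  example : 1 / 0 = 0 := by decide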

> the author's discussion of fields is irrelevant for programming language theory in the first place.

It's not about programming, but about any formalization (which includes, but is not limited to programming).

Anyway, see my comment https://news.ycombinator.com/item?id=17738558, which distills the debate into a concise, precise form.


>Since y = x/0, it follows that the product of y and 0 is equal to x, because division is the inverse of multiplication.

Can you explain how this follows? I thought division was only the inverse of multiplication for all nonzero denominators, which would mean we can't use that definition for deduction in x/0.

It might hinge on your next sentence:

>By the field axioms, division does not exist if there is no multiplicative inverse with which to multiply.

but I don't understand why that's necessarily true. I don't understand how the field axioms require division by x to require the existence of a multiplicative inverse of x when x is zero. Sorry to take a bunch of your Friday, but I'm very curious now. Explanation much appreciated.

-------

edit:

Come to think of it, couldn't I define x/y as cotton candy for all x,y in field F and still satisfy the field axioms? They just don't refer to division.

Any connection between x/y and y's multiplicative inverse is just a nice convention. That convention states that x/y = x * mult_inv(y) when y != 0, but nothing else. That definition has nothing to do with the field axioms and changing it doesn't require that I change anything about multiplicative inverses. That means I don't touch the field axioms and my field is still a field.


Your edit is starting to get it. If by the argument of the article x/y = cotton candy for all x,y, then probably the argument of the article isn't good. And the reason is precisely that division in a field is taken to be nothing but a notational shorthand for multiplication by the multiplicative inverse.


> Come to think of it, couldn't I define x/y as cotton candy

Yes. That's the thesis of the article. Make an arbitrary choice for 1 / 0 = ?, and if it helps you, use it. It's mathematically, rigorously fine.


This is explicitly talked about in OP about 2/3 of the way down the page. "If 1/0 = 0, then 1 = 0 * 0" is untrue. Your argument is "1/0 = 0, so multiply both sides by zero: 1/0 * 0 = 0 * 0, and then take 1/0 * 0 = 1 * 0/0." That last step isn't the case for reasons covered in the article we're ostensibly discussing.


Right. So the multiplicative inverse property _breaks_! He just points out that it breaks, and thus you need to use a more complicated property instead. That doesn't mean that the property doesn't break.


He's not saying that it breaks; quite the opposite, he repeatedly states that 0⁻¹ doesn't exist. He's just defining division in a specific way.

The MI property states that every element except 0 has a multiplicative inverse. He's defining division via two cases: If b≠0, then a/b = a*b⁻¹ (multiplicative inverse). If b=0, then a/b=0. This definition does not imply that 0⁻¹ exists, so there's no violation of MI.


The problem this and the other replies miss is that the standard definition of division is multiplication by the inverse. The entire argument rests on a notational sleight of hand. The property that held before -- that _when defined_ division has the inverse property -- no longer holds. Thus many equational identities that otherwise would hold do not hold.


> The problem this and the other replies miss is that the standard definition of division is multiplication by the inverse.

Try to state this definition formally. The statement: ∀ x,y . x/y = xy⁻¹ is not a theorem of fields or a definition of division. However, ∀ x, y . y ≠ 0 ⇒ x/y = xy⁻¹ is, but is completely unaffected by defining division at 0. Those who think they see a problem rely on informal and imprecise definitions. Could you formally state a theorem that is affected? That would help you get around issues that are merely artifacts of imprecision.

But let's entertain you, and state that what we really mean by the informal and vague statement, "division is the inverse of multiplication," could be stated formally as:

    ∀ x ∈ dom(1/t). x(1/x) = 1
You are absolutely correct that this equational theorem is broken by extending the domain of division. However, there is absolutely no way to say that the formalization of this theorem isn't actually

    ∀ x ≠ 0 . x(1/x) = 1
because the two are equivalent. You cannot then claim that something is necessarily broken if you choose to pick a formalization that is indeed broken, while an equivalent formalization exists, that is not broken (not to mention that the formalization that is broken requires a strictly richer language). All that means is that your formalization in this case is brittle, not that laws are broken.


I have to disagree -- this isn't sleight of hand. The standard definition isn't being violated here, because standard division isn't a total function. The denominator's domain in Hillel's function is a proper superset of the standard domain: when restricted to the standard domain, the two functions are precisely equivalent. Therefore, every standard identity still holds under Hillel.

The hole that he is filling here isn't one that he bored into the standard definition, but a hole that the standard definition already admitted. If something is explicitly undefined, there's nothing mathematically wrong with defining it, as long as the definition doesn't lead to inconsistency.


> If something is explicitly undefined, there's nothing mathematically wrong with defining it, as long as the definition doesn't lead to inconsistency.

The definition does lead to inconsistency...you can't look at the field axioms, observe that 0 has no multiplicative inverse, then proceed to define a special, one-off division rule that doesn't involve multiplicative inverses for that one element. Either your division rule is pathological and breaks a fundamental field property or you've introduced a division rule which is just a syntactical sugar, not a real operation (in the latter case you've introduced confusing notation, not a new division function). Why do you think mathematicians explicitly state that the real field with the augmentation of positive and negative infinity (which allow division by 0) is not a field?

I don't understand why there is so much resistance to this idea in this thread, but the simple fact remains that if you define division by an additive identity (0) in any way, the field containing that unit ceases to be a field. This is because all elements cease to be unique. You can quickly prove that every element is equal to every other element, including (critically) the additive and multiplicative identity elements. Fields are defined by closure under the operations of addition and multiplication, and that closure requires uniqueness of their respective identities. Upend that and your entire field structure breaks down, because all you're left with is a field with a single element 0.

Stating that you've defined division by 0 using a one-off case that permits all other field identities to remain consistent is like saying you've turned the complex field into an ordered field using lexicographic ordering. You haven't, because i admits no ordering, much like 0 admits no multiplicative inverse.

Onlookers reading these comments probably think those of us harping on this point are anal pedants with a mathematical stick up our ass. But this thread is increasingly illustrating my central point, which is that the author shouldn't have tried to justify numerical operation definitions in a programming language using field axioms of all things.


> Onlookers reading these comments probably think those of us harping on this point are anal pedants with a mathematical stick up our ass. But this thread is increasingly illustrating my central point, which is that the author shouldn't have tried to justify numerical operation definitions in a programming language using field axioms of all things.

You can't use your own stubbornness to justify itself.

I'm waiting for you to justify your claim that this extension to division breaks any of the field axioms. pron even made you a nice list of them.

Just name one equation/theorem that the new axiom would break.

I'm completely open to being convinced! But so far you've only given arguments about giving a multiplicative inverse to zero. Everyone agrees on that. It's the wrong argument.


Because a divisor cannot exist unless it is a multiplicative inverse. Therefore 0 is not a divisor.

This is getting to be Kafkaesque...it breaks the field axioms themselves. How many different explanations and external resources do I need to provide in this thread for you to be convinced that this is not a controversial point in modern mathematics? I just explained it in the comment you responded to.

You have exactly two options here.

If you define x/0, that definition must interact with the rest of the definitions and elements of the field. To maintain multiplicative closure (a field axiom!) there must be a unique y element equal to x/0. So tell me how you will define x/0 such that x is not equal to the product of 0 and y. Regardless of what you think the author has shown, the burden of proof is not on me at this point to show that you can't do it, because it follows directly from the field axioms. Trying to impose a one-off bizarro divisor function defined only on {0} is not only mathematically inelegant, it immediately eliminates the uniqueness of all field elements. Therefore your "field" just becomes {0}, and since it lacks a multiplicative identity it ceases to be a field. There is your contradiction. Why don't you tell me how you're going to prove any equation defined over a field that relies on the uniqueness or cancellation properties of fields?

On the other hand, let's say you tell me you want define x/0 so that the definition doesn't interact with any of the field definitions or elements. Then you haven't actually introduced any new operation or definition, you've just designed a notation that looks a lot like division but has no mathematical bearing on the field itself (i.e. absolutely nothing changes, including for 0). That's not a divisor function, it's just a confusing shorthand. You can't just add another axiom to a field and call it a field.

If you believe I'm stubborn, that's fine. I might be a poor teacher! There are ample resources online which will patiently explain this concept in mind numbing detail. It boggles my mind that there are people in this thread still fighting an idea in earnest which has been settled for over a century.


> Because a divisor cannot exist unless it is a multiplicative inverse. Therefore 0 is not a divisor.

So there are two separate issues here. One is whether we can extend the definition of the "/" operator, and the other is whether we call it "division".

I'm not interested in what we call it. I'm interested in the claim that extending "/" will break the field.

The dichotomy you're talking about is wrong. The two options are not "multiplicative inverse" and "does not interact with anything". "1/0 = 0" interacts with plenty! If I make a system where it's an axiom, I can calculate things like "1/0 + 5" or "sqrt(1/0)" or "7/0 + x = 7". I can't use it to cancel out a 0, but I can do a lot with it.

> It boggles my mind that there are people in this thread still fighting an idea in earnest which has been settled for over a century.

Remember, the question is not "should this be an axiom in 'normal' math?", the question is "does this actually conflict with the axioms of a field?"

> You can't just add another axiom to a field and call it a field.

Yes you can. There is an entire hierarchy of algebraic structures. Adding non-conflicting axioms to an X does not make it stop being an X.


> you can't look at the field axioms, observe that 0 has no multiplicative inverse, then proceed to define a special, one-off division rule that doesn't involve multiplicative inverses for that one element

Why not? What mathematical axiom does this break?

> Either your division rule is pathological and breaks a fundamental field property

It doesn't, or you could show one here: https://news.ycombinator.com/item?id=17738558

> or you've introduced a division rule which is just a syntactical sugar, not a real operation

Is this math? We define symbols in formal math using axioms. I am not aware of a distinction between "syntactical sugar" and a "real operation."

> Why do you think mathematicians explicitly state that the real field with the augmentation of positive and negative infinity (which allow division by 0) is not a field?

That's simple: because unlike the addition of the axiom ∀x . x/0 = 0, adding positive and/or negative infinity does violate axioms 2, 7 and possibly 11 (depending on the precise axioms introducing the infinities).

> I don't understand why there is so much resistance to this idea in this thread, but the simple fact remains that if you define division by an additive identity (0) in any way, the field containing that unit ceases to be a field.

Because a field is defined by the axioms I've given here (https://news.ycombinator.com/item?id=17738558) and adding division by zero does not violate any of them. If you could show what's violated, as I have for the case of adding infinities and as done in actual math rather than handwaving about it, I assume there would be less resistance. I don't understand your resistance to showing which of the field axioms is violated.

> You can quickly prove that every element is equal to every other element, including (critically) the additive and multiplicative identity elements.

So it should be easy to prove using the theory I provided, which is the common formalization of fields. Don't handwave: write proofs, and to make sure that the proofs aren't based on some vagueness of definitions, write them (at least the results of each step) formally. The formalization is so simple and straightforward that this shouldn't be a problem.

> Stating that you've defined division by 0 using a one-off case that permits all other field identities to remain consistent is like saying you've turned the complex field into an ordered field using lexicographic ordering. You haven't, because i admits no ordering, much like 0 admits no multiplicative inverse.

Stop handwaving. Show which axioms are violated.


Consider the following two functions, that we can define for any field F:

• Our first function f(x, y) is defined for all x in F, and all nonzero y in F. That is, the domain of this function f is (F × F\{0}). The definition of this function f is:

f(x, y) = x times the multiplicative inverse of y

(Note that in the definition of f(x,y) we could say "if y≠0", but whether we say it or not the meaning is the same, because the domain of f already requires that y≠0.)

• Our second function g(x, y) is defined for all x in F, and all y in F. That is, the domain of this function g is (F × F). To define this function g, we pick a constant C in F, say C=0 or C=1 (or any C∈F, e.g. C=17 if 17 is an element of our field). And having picked C, the definition of this function g is:

• g(x, y) = x times the multiplicative inverse of y, if y ≠ 0, and C otherwise [that is, g(x, 0) = C].

Note that this function is defined for all (x, y) in F×F, and when y≠0 it agrees with f, i.e. g(x,y) = f(x,y) when y≠0.

Would you agree that both of these are functions, on different domains?

Next, we have the notation x/y. To assign meaning to this notation (and to the word “division”), there are two conventions we could adopt:

• Convention 1: When we say "x/y", we will mean f(x,y) (as defined above) — that is, x * y^{-1}.

• Convention 2: When we say "x/y", we will mean g(x,y) (as defined above).

The point of the post is that we can well adopt Convention 2: with such a definition of "division", all the properties that were true of f (on the domain F×F\{0}) continue to be true of g, except that g is defined on a wider domain.
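To make the two conventions concrete, here is a sketch in Python over the rationals (a genuine field), with the names f, g, and C taken from above:

  from fractions import Fraction

  def f(x, y):
      # Convention 1: domain F x F\{0}; x times the multiplicative inverse of y
      return x * (Fraction(1) / y)

  C = Fraction(0)  # the arbitrary constant picked for g

  def g(x, y):
      # Convention 2: domain F x F; agrees with f whenever y != 0
      return f(x, y) if y != 0 else C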

----------------------------------------------

Now, maybe Convention 2 offends you. Maybe you think there is something very sacred about Convention 1. In all your posts in this thread, you seem to be insisting that "division" or "/" necessarily have to mean Convention 1, and the provided justification for preferring Convention 1 seems circular to me — here are some of your relevant comments, with my comments in [square brackets]:

> Since there is no multiplicative inverse of 0, division by 0 is undefined behavior [This seems to be saying: Because of Convention 1, we cannot adopt Convention 2.]

> Since y = x/0, it follows that the product of y and 0 is equal to x, because division is the inverse of multiplication [Here, your reasoning is "because we adopt Convention 1"]

> in algebraic fields division by x is equivalent to multiplication by 1/x. This is precisely why you cannot have a field that admits division by 0: because 0 has no multiplicative inverse. [Again, you're stating that Convention 1 has to hold, by fiat.]

> If F is a field with elements x, y, then the quotient x/y is equivalent to the product x(1/y). If 0 has no multiplicative inverse, there is no division by 0. The two concepts are one and the same [This is just stating repeatedly that we have to adopt Convention 1.]

> A multiplicative inverse is a division. [This is merely insisting that Convention 1 has to be adopted, not Convention 2]

> divisor cannot exist unless it is a multiplicative inverse [Again, stating Convention 1.]

> you can't look at the field axioms, observe that 0 has no multiplicative inverse, then proceed to define a special, one-off division rule that doesn't involve multiplicative inverses for that one element. [Why not?] ...you've introduced a division rule which is just a syntactical sugar [yes the same is true of Convention 1; what's the problem?]

All these comments, which emphatically insist on Convention 1, seem to ignore the point of the article, which is that Convention 2 has no more mathematical problems than Convention 1, because the function g is no less a “valid” function than the function f.

In mathematics when we use words like “obvious” or insist that something is true because it just has to be true, that's usually a hint that we may need to step back and consider whether what we're saying is really mathematically justified. What we have here is a case of multiple valid definitions that we can adopt, and there's no mathematical reason for not adopting one over the other. (There's a non-mathematical reason, namely “it breaks convention”, but the entire point is that we can break this convention.)


I follow everything you're saying. As I have said many times already, I have no problem with convention 2. But don't use field theory to justify convention 2, because it's mathematically incoherent and unnecessary. I broadly agree - there is no reason to be involving field theory in programming language design.

I don't take issue with division by 0 - you can do that in mathematics just fine! I take issue with defining that division and calling the consequent system a field when it's not a field, and acting as though everyone else is wrong. The author invited this criticism when they loaded in the full formalism of field theory without needing to.

If the author had just stated they wanted to define division by zero that wouldn't be a problem. I have no idea why they felt the need to pull in abstract mathematics. I'm not disagreeing with their point, I'm taking issue with the strange and incorrect way they defended it.

Note that in my top level comment I specifically said, "Mathematics does not give us truths, it gives us consequences." I will happily agree with you that there is usefulness and coherence in a definition of 0. There is no canonical truth about the world regarding that matter. But a direct consequence of defining any division by 0 is that you cease to have an algebraic field.

Therefore, using field theory to defend a system which defines division by 0 doesn't make sense. It's not that the system is "wrong" for some meaning of wrongness. It's that you shouldn't be trying to pigeonhole field theory to make it work, because you don't need to.


Glad you follow! But I'm not sure we agree yet, as there's still something puzzling when you say:

> But don't use field theory to justify convention 2, because it's mathematically incoherent

> defining that division and calling the consequent system a field when it's not a field

> a direct consequence of defining any division by 0 is that you cease to have an algebraic field

If you go back to my comment (the one you're replying to), both the functions f and g assume a field F, and they are well-defined functions on F×(F\{0}) and on F×F respectively. (Do you agree?) For example, F may be the field of real numbers. Forget about the word “division” for a moment: do you think there is something about the function g, that makes F not a field?

To put it differently: I agree with you that it is a direct consequence of defining a multiplicative inverse of 0 that you cease to have an algebraic field. But the only way this statement carries over when we use the word “division”, is if we already adopt Convention 1 (that “division” means the same as “multiplicative inverse”).

Again, I think you are implicitly adopting Convention 1: you're saying something like “if we adopt Convention 2, then x/y means the function g and includes the case when y=0, but [something about multiplicative inverses, implicitly invoking Convention 1], therefore there's a problem”. But there's no problem!

It is not a direct consequence of defining the function g(x,y) that something ceases to be a field: it is a consequence only if you also insist on Convention 1, namely if you try to assign a multiplicative inverse to 0 (which everyone here agrees is impossible).

Let me emphasize: whether we adopt Convention 1 or Convention 2, there is no problem; we still have the same field F.


Look at it this way...

Standard definition of division function, d:

d(x, y) = x * y⁻¹, for all x and all y except y = 0

Author's modified, piecewise (https://en.wikipedia.org/wiki/Piecewise) definition:

d(x, y) = x * y⁻¹, for all x and all y except y = 0

d(x, y) = 0, for y = 0

He's just adding y = 0 to the domain of d(x, y) to extend the definition, and deliberately not using x * y⁻¹ for that particular part of the domain. No inverse needed.
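
A minimal sketch of that piecewise definition (in Rust; the `div_total` name is made up for illustration):

    // Total division: the ordinary x * y⁻¹ everywhere it was already defined,
    // plus the single extra case y == 0, mapped to 0 by fiat.
    fn div_total(x: i64, y: i64) -> i64 {
        if y == 0 { 0 } else { x / y }
    }

    fn main() {
        assert_eq!(div_total(6, 3), 2); // the usual case
        assert_eq!(div_total(1, 0), 0); // the extended case; no inverse involved
    }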


I know what he's doing. The problem is that when you make it a different function (even by just extending it), you change its equational properties. So equational properties that held over the whole domain of the function no longer hold over the extended domain. This is repaired by modifying the equational properties. But the modified equational properties mean that you now have a different system than before. So the whole thing is just playing around with words.


> So the whole thing is just playing around with words.

Er... that's what mathematics is. It's a word game - we build systems from arbitrary rules and then explore the results.

Look through https://www.mathgoodies.com/articles/numbers for a bunch of uncommonly-defined numbers.


You missed the part where he defines division as a/c = a * inverse(c) for all c != 0


The multiplicative-inverse identity is already broken: x / 0 * 0 is either 0 or ⊥ (aka an exception). It wasn't true to begin with. I happen to like the exception/partial function version, so I'm playing devil's advocate, but it really was already broken when we're talking about zero.


Wait, if 1 / 0 = infinity, then infinity * 0 = 1

This seems just as bizarre, since zero times anything shouldn't become 1, no matter how big or how many times you do it.


According to the standard definition of division, 1/0 does not equal infinity. It doesn't equal anything. It is undefined.


Yes, that's correct. The fact that 1/0 shouldn't be defined as infinity isn't an argument that it should be defined as 0.


There are many contexts where defining either 1/0 = -1/0 = ∞ or 1/0 = +∞ and –1/0 = –∞ is better than the alternatives, especially when working in an approximate number system like floating point.

In geometric modeling kinds of applications, I would say that these definitions are typically desirable, with 1/0 = undefined only better in unusual cases. As a simple example, it is typically much more useful for the “tangent” of a right angle to be defined as ∞ than left undefined.

But anyhow, there are no “facts” involved here. Only different choices of mathematical models, which can be more or less convenient depending on context / application.


Yes, exactly, I have no idea why you're being downvoted.


Look up the Dirac delta function. It's a spike that's infinitely tall and infinitely narrow, with an area of 1. It's now established as a very useful tool in EE. But many people fought it tooth and nail because of this logic.


> This seems just as bizarre, since zero times anything shouldn't become 1, no matter how big or how many times you do it.

It isn't bizarre, because there's an equal and opposite argument that anything times infinity is infinity, no matter how small the thing you multiply by.

If you actually do infinity * 0 you get NaN since there's no way to determine (without more information) whether the result should be 0, infinity, or anything in between.
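
A quick check of that behavior (Rust syntax here, but any IEEE 754 implementation answers the same):

    fn main() {
        let inf = 1.0_f64 / 0.0;            // IEEE 754: positive infinity
        println!("{}", inf);                // inf
        println!("{}", inf * 0.0);          // NaN: indeterminate, neither 0 nor 1
        println!("{}", (inf * 0.0) == 1.0); // false: NaN compares unequal to everything
    }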


1/0 = undef (cast to 0)

1 != 0*0

I don't see the inconsistency. Abstract math vs practical application.


You’re doing something different than real number arithmetic. Saying 1/0 = x, and then treating x like a real number is inconsistent. But just saying “we are going to augment the real numbers with an element x that is not a real number, and then define some properties of x and prove things about it” is not.


> You’re doing something different than real number arithmetic

Computer languages execute on rules that are not real number arithmetic. I didn't want to mention it, but there are these things called floats...

Edit: Pony took out the "normal" version of division by zero and suggests writing a wrapper to check beforehand.


Where did I say anything about computers? I’m referring to real numbers.


The topic is a computer language. I'm referring to utility.


Floating point is only useful to the extent that it approximates real arithmetic.


In Pony, floating point behaves the way you might expect: 1.0/0.0 follows the IEEE 754 standard and gives infinity.


It's not inconsistent. See this article [1] for an explanation. It's the first thing covered in the "Objections" section.

[1] https://www.hillelwayne.com/post/divide-by-zero/


So according to you, 0*0 = 0, thus 0/0 = 0?


If division by zero is not undefined, yes. But how does this disagree with the assertion that it's mathematically inconsistent (as opposed to programmatically inconsistent)? If any case exists that leads to a logically false conclusion, then it's not logically consistent.


Nope. If 0/0 = 0 and the usual cancellation rule (x * y)/y = x still held, then 1 = (1 * 0)/0 = 0/0 = 0, implying 1 = 0.


You didn't read the article, did you?


> the many pieces of code that assume that (x / y) * y equals x

The same issue arises if division by zero throws an exception. This is simply a more practical approach that dispenses with the exception handling (i.e. it becomes a less irregular case to test). Edit: it's not buggy behavior when it's expected.


I would say an exception is much more convenient. Buggy code that silently continues to run is very hard to debug!


I used to think the same way: let's throw an exception on divide by zero, and forget NaN like a bad dream! But then someone explained to me that it's common to feed a billion numbers into a long calculation, then look at the results in the morning and find them okay, apart from a few NaNs. If each NaN led to an exception, you'd come back in the morning and find that your program has stopped halfway through. So there's a certain pragmatism to having numeric code silently continue by default.
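
A sketch of that scenario (`long_calculation` is a stand-in, not anyone's real pipeline):

    // A few inputs produce NaN (square root of a negative, division by zero, ...).
    fn long_calculation(x: f64) -> f64 {
        (x - 2.0).sqrt() / x
    }

    fn main() {
        let inputs = [4.0, 1.0, 9.0, 0.0, 16.0];
        // NaNs propagate silently instead of aborting the batch,
        // so the rest of the results are still there in the morning.
        let results: Vec<f64> = inputs.iter().map(|&x| long_calculation(x)).collect();
        let nans = results.iter().filter(|r| r.is_nan()).count();
        println!("{:?} ({} NaNs)", results, nans);
    }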


Typically, divide by zero throws for integers because they can't express NaN, which can instead be returned for floating point. Since integers have no room for special values, you get exceptions instead. And this is actually defined at the processor level (for x86 among others); the trap is free (well, a sunk cost), so why not take it?

Zero is not a very good NaN.
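
Rust, for example, exposes both halves of this: plain integer division panics (the trap), and a checked variant returns an Option, since the integer type has no spare bit pattern for NaN:

    fn main() {
        let x: i32 = 1;
        println!("{:?}", x.checked_div(0)); // None: no NaN available for integers
        println!("{:?}", x.checked_div(2)); // Some(0)
        // A plain `x / d` with d == 0 at runtime panics, mirroring the hardware trap.
    }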


Yeah, agreed. The thing with 1/0=0 is bizarre, I guess my comment was more about why NaNs are okay.


What of a language without exceptions? A correct program cannot assume that (x / y) * y = x, because such an equality does not hold in general. Therefore, to be correct, the program must do something sensible in the case that y is zero before it tries dividing x by y. It shouldn’t really matter what x / 0 is, because if a program relies on its value then it is probably wrong.


It's not convenient at all in a language like Coq.


Can you elaborate?


It's common to think of an exception as the program stopping and never being started again. In that sense, the invariant is still maintained, because the program never sees it violated. This has non-trivial consequences: for example, Rust lets panicking expressions and endless loops stand in for a value of any type, even though they obviously can't actually produce one.


Assuming an exception is equivalent to any non-exceptional value doesn't break anything. See "Fast and Loose Reasoning is Morally Correct".


Holy moly that is not at all what that paper says! It specifically argues that certain equational properties of a given total language continue to hold in the total fragment of a given partial language. It is an embedding theorem combined with lifting properties, not a license to perform _any_ reasoning at all regarding the non-total portion of the language it considers!


> (x / y) * y equals x

Even without Pony's assumption, that's only true when y != 0.


Yeah, (x/y)*y = x is true whenever it's a meaningful statement. x/y has no meaning in standard mathematics when y=0. 1/0 isn't infinity or anything else in standard math, it's literally un-grammatical nonsense.


It should be true for every y that is allowed in the denominator.

That is how fractions are handled in higher math. (Either called "localization" or "ring of fractions").


It's true up to NaNs, which we can treat as bottoms.


It's talking about integer arithmetic, so it's already not true that (x / y) * y == x. For example, (5 / 2) * 2 == 4.


> …any pieces of code that assume that (x / y) * y equals x…

What? If you’re using integers or floating-point numbers, that has never been true in general. Consider x=1, y=49 in the realm of IEEE doubles. Addition isn’t even associative, consider 1e-50 + 1 - 1.
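
Both examples, checked directly with IEEE doubles (Rust syntax):

    fn main() {
        let (x, y) = (1.0_f64, 49.0_f64);
        println!("{}", (x / y) * y == x);      // false: the round trip loses a bit
        println!("{:e}", (1e-50 + 1.0) - 1.0); // 0e0: the 1e-50 was absorbed
        println!("{:e}", 1e-50 + (1.0 - 1.0)); // 1e-50: grouping changed the answer
    }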


Also just defining division to be whatever you want in the moment is not actually pragmatic, it's stupid and trivial. There's a reason we define a symplectic manifold the way we do. There's not a reason to say `1/0 = 0`, aside from the fact that the Pony designers didn't care to find a good solution to the problem.


What if number types were Optionals after any operation that could result in any kind of unusual number (sqrt(-1), Infinity, NaN)? Or maybe after every operation, since any operation could overflow the type.

Do any languages do that? Seems more consistent (if way more hassle) than giving a mathematically false result out of pragmatism. At least in a strictly typed language.


You know what would happen. Math is ubiquitous in lots of code, so syntactic shortcuts would be introduced. Thus, we wouldn't use `a >>= (\x -> 2*x) >>= (\x -> x^2)`, but would soon introduce syntactic sugar. For example, we could re-define `*` so that it had type `Maybe Number -> Maybe Number -> Maybe Number`. You'd need a `return` (or `Just`) at the beginning, but soon that would be elided too, so that numeric literals would be auto-promoted to `Maybe Number`s. Then the language would add silent mechanisms to allow `Maybe Number`s to be displayed easily. You should check if your `Maybe Number`s are `Just Number`s before letting them out to play, but, if the language doesn't force you, then you know you won't. And then we're right back in the current situation.


Zig defines checked versions of mathematical operators in its std lib [0], which return an Error Union. It's like an Optional (which Zig also has), but with an error instead of null. Your code can choose to handle this error however it wants.

[0] https://ziglang.org/documentation/master/#Standard-Library-M...


Isn't that essentially what floats already are? A sum type of numbers, infinities, and NaN(s).


Yes and no. I think the OP is looking for a strongly typed language where you can’t do any operation on that sum type, forcing you to check results after every operation.


JavaScript does that…


How so without optionals or static typing?


That's usually what NaN is used for (basically "None" for numbers).


This thread is showing once again that reading comprehension is not our strong suit here on HN.

The author does not once say that Pony's choice is a good one, only that whether it is a good or bad one should be settled by engineering consequences, and that there is no purely mathematical argument that precludes it.

I can't help but think it _is_ a bad idea, because it's easier to overlook a 0 appearing than a NaN. That argument isn't math, though.


The Pony devs did chime in and mentioned that they do define division by zero to be NaN (or positive infinity) where the underlying type supports it (e.g. IEEE 754 floats).

Most integer representations have no space for such a value, so the only choices available to language developers are:

1. Throw an exception or otherwise consider it an error

2. Define the result to be 0 or 1 or some other integer value (0 being the only good choice)

3. Use a non-standard integer representation

Most languages opt for option 1, Pony chose 2, and I've never seen 3, perhaps due to it requiring so much software interference and precluding inter-operation with other languages.


From what I have learned elsewhere on here, it sounds like Julia is doing something interesting.

https://news.ycombinator.com/item?id=17737308

https://news.ycombinator.com/item?id=17737158


> 0 being the only good choice

Devil's advocate: why not a maximum integer like 18446744073709551615 or 9223372036854775807?

0 looks like a valid number, while 18446744073709551615 is an obvious NaN for any programmer looking at the output.


Pony core team member here.

In some domains it might be. In others it might not. It's no more right or wrong in my mind.

I think the size of the integer needs to be considered in your thinking as well:

If you have unsigned 8 bit integers, the max value is 255; is that really going to seem like a NaN? How about 127 for a signed 8 bit integer?

And, should the result of division by zero vary based on the bit size of the integer? That's a discussion unto itself.


Pony requires special handling of partial functions. Option 1 would mean that all integer divisions by anything other than a constant are partial.

That seems sub-optimal.


> We’ve now established that if we choose some constant C, then defining division such that x/0 = C does not lead to any inconsistencies.

No, you haven't. You've merely failed to locate any. You've said "I'm not going to prove that this works. I'm going to assume that it does and act as if it did, and place the burden on you to prove otherwise."


Absolutely correct. That said, proof assistants are being used. Here's a quote from TFA:

> Lawrence Paulson, professor of computational logic and inventor of Isabelle:
>
> [...] This identity holds in quite a few different proof assistants now.


Wrong. Division is not part of the field axioms, it is a defined function, and changing its definition has absolutely no bearing on the consistency of your equational theory.


> No, you haven't. You've merely failed to locate any. You've said "I'm not going to prove that this works. I'm going to assume that it does and act as if it did, and place the burden on you to prove otherwise."

Excellent point. It's not even known if we can get any inconsistencies without making this definition; and it's known that, if it's true that we can't get any inconsistencies without it, then we can't prove that it's true. Building on such possibly shaky foundations can't make them stronger.


(That's not mathematical maundering, by the way; it is a correct if informal statement of part of the incompleteness theorems. I appreciated excalibur (https://news.ycombinator.com/item?id=17736486 )'s point, and wanted to underline the significant, and mathematically precise, difference between "one apparent inconsistency is not present" and "there is no inconsistency.")


It's painful how dumb this blog post is. It's clear that the multiplication is no longer associative with this definition.


This article grinds my gears. He quotes a number of mathematicians that correctly state the undefined nature of 1/0. Then proceeds to interpret that this means that we can choose any specific value to represent as 1/0 that we want. NO. We have "NaN" for a reason and it is an important signal to the programmer that a mistake was made. The language that assigns it to 0 silently is bunk as is this article.


> correctly state the undefined nature of 1/0

Erm, when we say it's "undefined" we mean it literally -- standard mathematical systems of arithmetic do not define a value for that division.

If you make another system of arithmetic you can define it how you want and be consistent with "regular maths" for the operations in which things are defined. It's an extension.

> We have "NaN" for a reason

Funnily enough, IEEE 754 defines 1/0 as positive infinity, not NaN. But none of these things are "the truth" in any reasonable sense of the word, just "useful systems".

Defining it as 0 at least means floating point numbers are (presumably) closed under arithmetic operations, which could be handy.


> Defining it as 0 at least means floating point numbers are (presumably) closed under arithmetic operations, which could be handy.

They already are closed. Floating point includes Inf and NaN for that reason. :-)


Ah, of course. Still, this definition gets you to something stricter still -- the "things that normal people think of as numbers" are closed under those operations. (Depending on what happens for overflow and 0/0, I guess.)


Standard mathematical systems of arithmetic do not permit a reasonable definition of 1/0.

Mathematicians define equality of fractions by stating that a/b = c/d if and only if a·d = b·c. This means that if we define 1/0 = 1 then 0 = 1.

To be fair, this is perfectly consistent, except everything in our system is equal to zero.


The story of what happens if we defined, for example, 1/0 = 0 is a little more complicated. I will include it because it is sufficiently interesting.

Equality of fractions is defined differently when zero divisors are allowed in the denominator. A zero divisor is a non-zero number that can multiply with another non-zero number to get zero. For example, if we work mod 12 then 3 is a zero divisor because 3·4 = 0.

If we want to allow zero or zero divisors in our denominators then we say that a/b = c/d if and only if there is some value s such that s·(a·d - b·c) = 0 where s is anything allowable in the denominator. If we are working with the integers, including this s term does nothing because s has to be something that can be a denominator and we only allow non-zero denominators.

So, even if we define 1/0 = 0, then literally every fraction would be equal to every other fraction: once 0 is allowable in the denominator, we can take s = 0, and s·(a·d - b·c) = 0 holds for any two fractions a/b and c/d.

These conventions can be broken (like, for example, addition of floating point numbers is not associative as pointed out by other comments) but it is definitely not "natural". In other fields of mathematics, like measure theory, it is possible to define things like "zero times infinity is zero" which is traditionally undefined but is a convenient shortcut and does not break anything that people working in measure theory care about.

For more: https://en.wikipedia.org/wiki/Localization_of_a_ring#For_gen...


> Mathematicians define equality of fractions by stating that a/b = c/d if and only if a·d = b·c. This means that if we define 1/0 = 1 then 0 = 1.

That's not implied, so far as I can tell.

a / b = 1 / 0, thus

a * d = b * c => 1 * d = 0 * c => d = 0

So all you can say is that d = 0, or at most that c / 0 = 0. Is there some extra step you're taking?


So take 1/0 = 1 and write the right-hand side as the fraction 1/1; applying the rule to 1/0 = 1/1 gives 1·1 = 0·1, i.e. 1 = 0.


IMHO I think the problem here is that most people focus on the "wrong side" of 1/0.

I mean, what the mathematical definition of division says is not that 1/0 is indeed something and that that something is "undefined" or "NaN" or anything else really. What it says is "I cannot do 1/0, the division operation a/b does not apply when b is 0".

So 1/0 is not a thing in itself in mathematics; it's something which cannot be operated.

Now, back some time ago, "division by zero" simply threw an error. It signaled "this is not something that can be done". undefined, NaN, anything else, including 0, is not really something that has a mathematical justification. It's merely a practical approach to encapsulate that error into some form of pseudo-value to control it to some extent.

Personally, I don't really see how 1/0 = 0 is better than 1/0 = NaN or "undefined" or "Infinity".


The practical downside is that it might make equations go slightly wonky, rather than completely wonky if a non-number is returned.

For example, say a zero divide happens with normalized values, meaning the immediate result is only off by one at most. Odds of catching that are probably low. Meanwhile, a NaN will infect all numbers that come into contact with it, bubbling up faster...

Under this system, programs are easier to debug if they use bigger numbers... That property does not seem like a win to me.

If I attempt to open a file, but an access fault occurs, I'd rather be told the fault and given a chance to recover than receive an empty file.


I agree with that; in general I'd rather be told too.

But I don't know anything about Pony or its goals. So it may be preferable for them, I don't know.


Except that 0 could actually be the result of a valid computation. NaN, undefined, or Infinity can not.


And you mean that as saying that it is then better? Or that it is not?

(Sorry, I'm not sure which way you mean it)


It seems straightforwardly better to me, because the programmer/user gets an immediate "you messed up" signal, instead of silently returning the wrong values until someone eventually (hopefully) notices.


Are we reading the same FA? In the section titled "The Real Mathematicians" the author has quotes of mathematicians saying that defining division by zero as zero is OK.


Not all of them, though.

I find particularly interesting Leslie Lamport's comment that "Since 0 is not in the domain of recip, we know nothing about the value of 1 / 0", which I think is the most correct mathematical stance.

Then again, I think it is all a red herring. This (Pony's decision) is not about the mathematical definition of division. This is about the trade-offs computational systems do to manage the situation.


"defining division by zero as zero is OK" And those mathematicians should be classed as wrong. If you treat zero as nothing, a value divided nothing does not make it magically disappear, it should just cancel out the mathematical operation. So 1/0 should be 1, 42/0=42 and so on. Otherwise to apply their logic, a*0=0 would have to be applied. Again if you take a value and multiply it with nothing, it should cancel out the maths operation. It is possible for whole areas of a science to follow down the wrong rabbit whole, medical science best shows this over the last 100-200yrs since scientific equipment has improved and the current thinking about something in the body has changed when new discoveries appear that contradict. Maths is not untouchable either, especially considering the fact Big G and the speed of light varies. Check out Rupert Sheldrakes of Cambridge Uni talks on this subject. The official bodies response was to mandate G and speed of light is now constant when its not.


My two issues with this are that it totally relies on a specially constructed definition, and that it leads to unintuitive results.

The issue with special definitions is you can use them to say anything you want, turning regular, common operations into weirdness.

What does it mean to take a factorial on the real numbers, or to add only on the even integers? In both cases, we're twisting what are generally accepted mechanics and domains into something else to get a funny result, like 1/0 = 0.

He gets to that point here:

" We can say that division’s domain is all the real numbers except zero. This is what we do in our day-to-day lives, and the way that Agda and Idris handle division. We can choose some value that isn’t a real number, such as “undefined” or infinity, and say x/0 = <whatever>. This is what some mathematicians do with the Riemann sphere. We could choose some real number, like 19, and say that x/0 = 19. This is what Isabelle, Lean and Coq do.

"

So, tiny change in definition --> big change in outcome.

*I'm an undergrad


You may not be familiar with it, but a "factorial on the real numbers" is the gamma function, and it has all sorts of useful applications. (It works on real and complex numbers, except for non-positive integers.)


The concerned reader might wonder how it is possible to assert that there is _one_ correct definition of an extension of the factorial function to the real/complex numbers. Why is the gamma function better than any other extension?

The answer is that the gamma function is the unique logarithmically convex extension of the factorial function.


There ISN'T one correct extension of a factorial function.

The properties of the standard gamma function are great, but some might prefer the properties of Hadamard's or Luschny's alternative gamma function.

Like many arbitrary extensions in mathematics -- be it the factorial function or the division function which deals with 1 / 0 differently -- it's a matter of taste and convenience!

Interestingly the arbitrariness of some mathematical choices seem to unnerve folks on an almost existential level. My guess is that it conflicts with their expectation of capital T truth from mathematics.


> What does it mean to take a factorial on the real numbers, or to add only on the even integers?

I don't think that's exactly the same problem.

It is perfectly valid to define an operation on a particular domain. Defining an add operation on even numbers is perfectly fine (If you had said odd, well...). Factorial on real numbers is more or less the Gamma function. The problem here is that the operation is defined on one domain...

> We can say that division’s domain is all the real numbers except zero.

...but then, for some reason, it is decided that it will be applied on a different domain. That is, we're going to apply it on all real numbers, including zero. So now we need to modify the definition of the operation. And the thing is that the new definition, created to accommodate the new domain, usually tends to produce friction precisely on the new elements added to the domain. We excluded zero initially because that way the operation was defined in a simpler way. Now, having to consider zero implies that our definition becomes more "complex".

Now, we can change the definition in different ways. Is any one of those objectively "better" or "worse" than the rest? Well, that's what the discussion is all about.


Just wanted to share: C++ allows you to overload operators, so you can certainly create your own universe of mathematics if desired.
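
The same idea sketched in Rust, which overloads operators through traits (the `Total` wrapper is invented for the example):

    use std::ops::Div;

    // A wrapper whose `/` is total: dividing by zero yields 0.
    #[derive(Debug, Clone, Copy, PartialEq)]
    struct Total(i64);

    impl Div for Total {
        type Output = Total;
        fn div(self, rhs: Total) -> Total {
            if rhs.0 == 0 { Total(0) } else { Total(self.0 / rhs.0) }
        }
    }

    fn main() {
        assert_eq!(Total(1) / Total(0), Total(0)); // our own universe of mathematics
        assert_eq!(Total(6) / Total(3), Total(2));
    }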


Edit: I guess there's a reason I'm not a language designer.

I've always thought that programming languages should have a nonzero number class in the vein of unsigned and float and that division should only be defined with a nonzero number class as the denominator. To divide a 64-bit float by another float, you'd have to either specify it as nonzero in the type or convert it somehow. Make division by zero impossible with the type-checker.

Setting your programming language to evaluate x/0 = 0 seems evil. You're taking a bug in the programmer's code and hiding the fact that something logically unsound happened, in a way that would be very difficult to debug or detect.

In fact, this is what happens when C# thinks it's being clever by returning infinity as the result of a division operation (which isn't even a number). It's a bug that winds up infecting every function that relies on that code without ever throwing an error.

A programming language shouldn't return NaN or Infinity or anything like that when it encounters division by zero. It should demand you use error-handling or ensure it can't happen. If it does happen, it should tell you exactly where the problem occurred and not assume that some arbitrary value will work just as well.


Hi,

I'm on the Pony core team and I generally agree with you. The problem is, if you want to allow people to write high performance code, you are penalizing them because, for floating point:

1.0/0.0 is infinity. That's not C# being clever. That's C# following the floating point standard and using the math that the computer provides. To not use that standard means that you make every use of division slower, even if the programmer has in some way asserted or otherwise knows that they won't be dividing by zero.

Note that 1/0 is integer math, and for that there is no standard; it's undefined behavior. You can't return NaN or Infinity for that without boxing numbers, which is a performance hit and would be penalizing programmers who want/need to go as fast as possible.


Another option is to use a NonZero type. A NonZero type is one that has been checked to not be zero. The type system ensures that you can only use a NonZero as a denominator in a division. You can minimize the number of checks for zero and the type system still protects you. And at points where performance trumps safety you can still use a special version of division that returns 0 when the denominator is 0.

How can 1 / 0 = 0 be fast? It's not a standard floating point instruction, so extra logic is needed to convert the Inf to a 0.


I'm curious, what do you think about Julia's approach with its special "missing" value? Does this approach make sense outside the context of data analysis?

https://julialang.org/blog/2018/06/missing


I can't really comment. I don't know how they are doing it and why they made that choice. I assume that Julia was already boxing integers in some fashion at which point this 100% makes sense to me.

Also, given the performance profile that you are seeking to allow programmers to achieve, I think boxing all integers makes sense, then you can give folks protection from integer overflow and underflow and otherwise make how the computer does math more closely approximate what we expect from doing math.

I think that, following my above reasoning, only having floating point math can also make sense. That is... in most programming languages 3 and 3.0 are different, such that 3/2 = 1 and 3.0/2.0 = 1.5.

We allow for 3/2 = 1 because integer math is faster than floating point math and we want to allow folks to go as fast as possible when doing things like addition, subtraction and multiplication; we accept that we can't represent all the numbers that come from division and give an approximation.


Julia doesn't box integers; that would destroy speed. It also doesn't protect from overflow/underflow. It specializes on Vector{Int} so that it's just a standard vector of integers. Then Vector{Union{Int,Missing}} is stored in memory as a vector of integers with a bitarray declaring missings. Indexing operations are expanded to ifelse a check for the existence and then return the value if it exists or a missing. Then branches are added by the compiler to handle the small unions. This keeps pure integer use without overhead, and gives a small penalty when using missing (which doesn't turn out to be all that bad due to how pipelining is usually done), but it's safe and any type can have missing values.


Thank you. I look forward into learning more about how Julia handles this.


I would like to add, given that this is my favorite talk, that the decisions a language ends up making in things like this are a good reflection of its values. Bryan Cantrill has an incredibly good talk on this that I recommend to anyone reading this:

https://vimeo.com/230142234


If I understand correctly, instead of the usual kind of boxing, they use a separate type tag (one byte) and support union types with up to 255 other "special" enum values. The type tags and data are stored separately, so instead of an array of boxed values, they internally have two arrays, one for type tags and the other for data.

Apparently this can work well with SIMD instructions.


That's pretty cool. There's a lot that a couple of us on the Pony core team like about Julia from our limited knowledge. I'll have to look into that more.


> We allow for 3/2=1 because, integer math is faster than floating point math

Is it? I haven't measured it myself, but from reading Abrash's Black Book I got the idea that floating point division is faster on x86 with an FPU.


This kind of type becomes really tedious to use unless there is some sort of viable "subtyping"—you certainly want to be able to use a non-zero number where any number works and, ideally, you want to be able to recover a non-zero number when you write a function that takes in a normal number and always returns a positive value. If you're forced to use a bunch of explicit conversion functions to achieve this you add a lot of noise to your code and, generally, the most common function is going to be along the lines of "I know this is positive, so throw an error if it's 0"—which is exactly what we get anyway.

Explicit might be better than implicit most of the times, but it is possible to be too explicit.

That said, we can achieve this, although it's probably a bit trickier than you'd think at first. The best option I know is called "refinement typing", which basically allows you to specify types as subsets (i.e. "the set of integers which are not 0") and automatically verifies whether your functions actually return a value in the subset.

Liquid Haskell[1] is an example of a system you can use today to play with the idea. It uses an automatic logic solver (Z3) under the hood to automatically prove that your refinements hold. (For example, I believe it should be able to automatically verify that the function f x = abs x + 1 will never be 0.)

[1]: https://ucsd-progsys.github.io/liquidhaskell-blog/


> generally, the most common function is going to be along the lines of "I know this is positive, so throw an error if it's 0"—which is exactly what we get anyway.

That's still a lot better than before. It's not too hard to read a function and be reasonably confident that it will always return a positive value. On the other hand, it's pretty hard to audit an entire codebase to make sure a value can never be negative, or something like that.

The main difference is probably only needing to do that audit once, when writing the function, instead of every time you need to rely on the function's return value being positive.

It would also save a lot of work because a lot of operations _can_ be detected automatically. positive * positive = positive, etc. So you would only have to manually assert in the cases it wouldn't be doable automatically, which would be significantly fewer than before.
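
A sketch of the "audit once" idea with an invented `Positive` wrapper: the only runtime check lives in the constructor, and operations that provably preserve positivity need no further checks:

    #[derive(Debug, Clone, Copy)]
    struct Positive(f64);

    impl Positive {
        // The single audited entry point.
        fn new(x: f64) -> Option<Positive> {
            if x > 0.0 { Some(Positive(x)) } else { None }
        }
        // positive * positive = positive: no runtime check needed.
        fn mul(self, rhs: Positive) -> Positive { Positive(self.0 * rhs.0) }
        // positive + positive = positive: likewise.
        fn add(self, rhs: Positive) -> Positive { Positive(self.0 + rhs.0) }
    }

    fn main() {
        let a = Positive::new(2.0).unwrap();
        let b = Positive::new(3.0).unwrap();
        println!("{:?}", a.mul(b).add(a)); // Positive(8.0)
    }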


Work was done on adding simple value dependent types to Pony as part of someone's academic work, sadly it hasn't made its way into mainline Pony yet.

https://vimeo.com/175746403


I think you’ll find that extremely annoying in programs that do ‘real’ calculations. The compiler will rarely be able to prove that a divisor is non-zero, so you will have to add lots of code to handle edge cases. That can be worth it if you are writing robust numerical code, but I doubt even those writing numerical software libraries would want to handle each edge case.

Worse, division by zero is just the tip of the iceberg. There’s overflow and underflow (which both can happen with addition, subtraction, multiplication, and division) and invalid operations (0/0, √-1, arcsin(2), etc), and you might even want to extend this to detect inexact computations (e.g. to properly report whether you are sure a later division by zero actually is a division by zero)

IEEE 754 ‘won’ because it is relatively simple to use and many times good enough for real use.


So what type is `integer - integer`? It might be zero, but the type system doesn't statically know whether it is or not. If the result of every arithmetic expression is possibly zero, then you're back to run-time checks for every value. I don't see how this is practically useful.


I would think that the result of a nonzero int minus a nonzero int would be an integer. So f(nonzero x, y) = y / x is fine, but f(nonzero x, nonzero y, z) = z / (x - y) is not.


In Agda you have the choice of either making it truncate at 0 or constraining the domain to (a - b, a >= b), in which case you don't have to define the function when a < b. This of course is only suitable for theorem proving at the moment, but research is being done on how to have proofs like a >= b without sacrificing performance.


Error handling your infinite results would dramatically slow down your programs. And C# doesn't think it's being clever, it's following IEEE float behavior implemented in the hardware.


I don't think this really solves the problem. It just kicks the same problem over to addition instead.

    let a = 17
    let b = 10
    let c = 7
    let d = a - (b + c) // throws AddsToZeroException


> Make division by zero impossible with the type-checker.

Forcing programmers to consider division by zero when it occurs can be done simply by having the language compiler force handling of a divide by zero exception when division occurs.


> Forcing programmers to consider division by zero when it occurs can be done simply by having the language compiler force handling of a divide by zero exception when division occurs.

Which is what Pony originally did and, lordy, that can be an ergonomic nightmare when I know I don't have 0 and can prove it, except I can't prove it to the compiler and the type system.

This problem is why integer division by zero is zero in Pony. Because I, as a programmer, can prove that I won't divide by zero; but having a type system or compiler I can prove it to, in a computation engine that takes user input, is a very non-trivial problem.

The solution in Pony is going to be a combination of "safe math operators" and adding value dependent typing to the compiler.

"safe math operations" would be: *?, /?, -?, +? that will error on division by zero as well as integer overflow and underflow (and have slower performance as well)


Wouldn't it be better to have an explicit unsafeCastToNonZero() function in the stdlib instead of forcing invisible unsafe casts in every division statement of every program?


> Forcing programmers to consider division by zero when it occurs can be done simply by having the language compiler force handling of a divide by zero exception when division occurs.

That's materially different than a non-zero type, and IMO it's meaningfully worse. The type checker is most useful when it moves information.


Rust recently added nonzero number types (see https://github.com/rust-lang/rfcs/blob/master/text/2307-conc...). Though I believe the primary motivation here was to reduce memory usage in some special cases.
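
A sketch of how such a type moves the zero check to construction time (`NonZeroU64` is the real std type; `safe_div` is made up here):

    use std::num::NonZeroU64;

    // The division itself can no longer fail: the zero check
    // already happened when the NonZeroU64 was constructed.
    fn safe_div(x: u64, y: NonZeroU64) -> u64 {
        x / y.get()
    }

    fn main() {
        match NonZeroU64::new(7) {
            Some(nz) => println!("{}", safe_div(42, nz)), // 6
            None => println!("denominator was zero"),
        }
    }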


The complaints from tikhonj about verbosity are on point, but I think you're right about the shape of it. Other things being equal, it's much better to enable the type system to help you move checks further upstream.


You still have runtime issues if you're doing arithmetic with that type and end up getting zero.


Only if arithmetic operators have to return the same type they operate on.


Unlike what the author says, this is the total opposite of a practical/pragmatic solution.

He does not prove that this is a useful representation, only that given his own axioms, this can be considered mathematically correct.

Very practical issues with 1/0 == 0:

- This result is counter-intuitive: it took building custom fields and responding to the incredulity of all.

- The main reason this is counter-intuitive is not mentioned in the post: division is no longer monotonic.

    1/0.5 = 2
    1/0.25 = 4
    1/0.0001 = 10000
    1/0 = 0

This is calling for an Ariane 5-type crash because an underflow error caused a value to suddenly fall from 10e6 to 0.


> This is calling for an Ariane 5-type crash because an underflow error caused a value to suddenly fall from 10e6 to 0.

That’s the opposite of what happened. A value overflowed, which triggered an exception, which caused a module to emit diagnostic information, which was misinterpreted by other systems as flight data. Interestingly, the value which overflowed was part of a system which was only used until LT+7, which had already passed…

In short, if the overflow had simply saturated or wrapped around to 0 or given some other garbage result, the Ariane would have not crashed.


We are speaking of integers here, not FP numbers.


Not to be pedantic, but one of the author's own examples involves pi inverse, so I think discussing FP numbers is valid.


Yes I know, but in the context of Pony, 1/0==0 only applies to integers, not FP numbers. The article doesn't make it clear.


Floating point numbers in Pony follow the standard and return Infinity for division by zero. Integer division by zero is undefined behavior and something each programming language has to decide how to handle.


The article mentions floating point numbers as well, and the same applies to integers anyway. Only used FP numbers for emphasis.


I think this is calling for a Pony programmer working with floating point to know how their own basic operators function. Wrapping division in a function that checks for zero will give you the proper result you would like to return.


Ah yes, the wobbly-step school of language design.


Hrm ... argumentation by reference to authority (various Ph.D. people) is a sure way to lose a scientific argument.

But that isn't my only qualm.

In the construction of the fields, there is a simple definition of the division function. It is intrinsically the solution of a = r * b, where r is the unknown. If a is nonzero, and b is zero, then r, the ratio, is said not to exist. Put another way, there is no real value of r that can be chosen that satisfies r * 0 = a. See [1] for example, as 0 is not an invertible element of the field over the reals.

So ... I can't say I agree with their choice of 1/0 == 0. They may be free to choose any value they wish, but then from an aesthetic viewpoint, do they really want to surprise the user?

[1] https://en.wikipedia.org/wiki/Multiplicative_group


>In the construction of the fields, there is a simple definition of the division function. It is intrinsically the solution of a = r * b, where r is the unknown. If a is nonzero, and b is zero, then r, the ratio, is said not to exist. Put another way, there is no real value of r that can be chosen that satisfies r * 0 = a.

Exactly. That's the definition of division. Here, he is introducing a totally different definition and claiming it isn't wrong because it is consistent. I mean, by his logic I could define division as a/b := a-b and claim it is consistent and therefore not wrong, but that is obviously not division...


"Mathematics does not give us truths, it gives us consequences."

I'd like to point out that, as I stated in another comment in this thread, the author's supposed refutation of the inconsistency inherent in division by zero within fields is incorrect. Their refutation is as follows:

*The problem is in step (3): our division theorem is only valid for c ≠ 0, so you can’t go from 1/0 * 0 to 1 * 0/0. The “denominator is nonzero” clause prevents us from taking our definition and reaching this contradiction.*

This is a strawman, and it's not the way a formal proof that division by zero in fields is undefined would proceed. First and foremost, the conceptual purpose of the proof is to demonstrate that you cannot define division by zero while still retaining the algebraic structure of a field. Trying to refute this point by stating that the proof cannot make use of division by 0 is begging the question. The whole point is that the proof shows you any way you define division by 0 is going to compromise your definition of a field, or it's going to make fields with nonzero elements impossible.

What the author provided is a very contrived strawman for refutation that belies the actual point. You can define division by zero if you'd like, by using wheels, or extended number systems based on fields (i.e. positive and negative infinities in the extended real number system), but you cannot do it using fields. In fact, the modern, axiomatic definition of a field explicitly excludes the unit 0 from the otherwise sane rules of multiplicative inverses.

More generally, I'd like to make a couple of observations from both a philosophical and a practical standpoint. First, mathematics does not give us true statements about the world, it gives us consequences that must follow if we accept various axioms or definitions. You can define division by 0 if you'd like, and you can even do so in a sane and useful way. But you will not have a field. But much more importantly, it's conceptually unsound to base an argument about the practical, programmatic behavior of an undefined operation on imperfect arguments about abstract mathematics. Technically speaking, computers don't even deal with real numbers. If you find yourself mounting a defense of your programming language's behavior by running through the first lecture of a real analysis or linear algebra course, something has gone very wrong with your enterprise.


> The whole point is that the proof shows you any way you define division by 0 is going to compromise your definition of a field, or it's going to make fields with nonzero elements impossible.

But it does neither. All field theorems pertaining to inverses/division take the form `x ≠ 0 ⇒ ... x⁻¹ ...` None of them is compromised.

> In fact, the modern, axiomatic definition of a field explicitly excludes the unit 0 from the otherwise sane rules of multiplicative inverses.

Exactly, which is why defining 1/0 = 0 (or 42) poses no problem to fields (BTW, the Lean proof assistant defines 0⁻¹ = 0 for fields, even though it's not an inverse: 0·0⁻¹ = 0·0 = 0, not 1).

The whole problem is one of formalization. "Undefined", as it's used in mathematics, is very much informal. It is used to say that a certain expression, 1/0, while "grammatically" correct, is meaningless. Formal systems simply cannot do that: the expression 1/0 must either be ill-formed (which can be done with dependent types but is often inconvenient) or it must mean something in the semantic domain of the language. Different formal systems define what that something is, but whatever it is, it is no longer "undefined" in the informal sense.
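
A concrete taste of that, assuming I have the Lean 4 details right (natural-number division there is total by definition):

    -- Lean 4: Nat division is defined so that n / 0 = 0.
    #eval (1 : Nat) / 0                 -- 0
    example : (1 : Nat) / 0 = 0 := rfl
    -- No inverse of 0 is asserted anywhere; lemmas about inverses
    -- keep their x ≠ 0 hypotheses, so no contradiction can be derived.

The expression is perfectly well-formed and provable, yet no axiom about inverses is touched.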


> The whole problem is one of formalization. "Undefined", as it's used in mathematics, is very much informal. It is used to say that a certain expression, 1/0, while "grammatically" correct, is meaningless. Formal systems simply cannot do that: the expression 1/0 must either be ill-formed (can be done with dependent types but is often inconvenient) or it must mean something in the semantic domain of the language. Different formal system define what that something is, but whatever it is, it is no longer "undefined" in the informal sense.

We're substantially in agreement here. This circles back to my core thesis - there is no point in defending the implementation of something mathematically undefined in a programming language by retreating to the formal axioms of fields (or any other algebraic structure, for that matter).

Your other point about 1/0 = x for whatever you'd like x to be is something I mentioned in my comment. The extended real number system does something similar by defining arithmetic of positive and negative infinities. But the extended real number system is not a field. It's a fine and useful system, but you insert pathologies into the real field by trying to accept infinity as a real number. Likewise if you want 1/0 to be equal 42, that's fine. But you're going to "infect" everything that 42 touches in the process. You won't compromise arithmetic, but you will compromise the definition of a field.

What I'm getting at is that 1) mathematically speaking, the author's entire preamble about division by 0 is both incorrect and irrelevant, 2) we should not be trying to justify implementation behavior in systems which cannot use real numbers through the wizardry of field theorems.


> But you're going to "infect" everything that 42 touches in the process

How so? Remember that, e.g., ∀x . 0x ≠ 1 still holds, and it is not true that 0((1/0)(1/42)) = 1, because associativity doesn't apply: the usual associativity-style manipulation for division stems from the existence of a multiplicative inverse, and 0 has none even though 1/0 = 42.

> You won't compromise arithmetic, but you will compromise the definition of a field.

Not at all. Formal systems of mathematics define fields (in general, not just in arithmetic) specifically with 1/0 = 0. Can you show a single field axiom or theorem that would be affected by this?

Perhaps you could if you insisted on stating a theorem by using an informal term such as "wherever foo is defined," but 1/ those theorems can always be stated in more precise terms that don't make use of the informal notion of definedness, and 2/ very few formal systems formalize definedness in a way that means an undefined value must not be 42 (i.e. where you can actually prove 1/0 ≠ 42 or where 1/0 is ill-formed) -- see Feferman https://math.stanford.edu/~feferman/papers/definedness.pdf -- and AFAIK, such systems are very, very rarely used in practice.


At this point I've written a number of comments throughout this thread which explain why division by zero is not possible in a field - specifically, a field which contains nonzero elements. You are asking me to show you an axiom or theorem proving this, but I don't know how else to explain this to you since I've already explained it in a number of different ways. In a sibling comment I even write out the formal proof, and in others I've explained why the author's refutation is incorrect.

I'm not sure what else to say. The author is wrong - it is not at all controversial among mathematicians that the algebraic structure of a field cannot support division by zero. Perhaps I'm simply a poor teacher - in that case I would refer you to [1], [2] and [3] for a better explanation. As I've said elsewhere, you can make a coherent algebra by extending a field - then division by 0 or infinity is possible. But it stops being a field - just as it cannot be many other algebraic structures. Therefore I feel very confident stating that 1) the author is wrong, 2) I've sufficiently explained why in these comments and 3) the burden isn't really on anyone to disprove that division by 0 works at this point.

In any case, this entire discourse is ridiculous because the author should not be trying to defend programming language decisions using abstract mathematics. This kind of nuance and ambiguity is neither necessary nor relevant for justifying an implementation of defined behavior for division by zero. The whole exercise is a farcical distraction, to put it bluntly. Mathematics is pedantic by design, but this sort of definitional rigor is laughably unneeded for defining integer- and float-based arithmetic. The author does a disservice to what is an otherwise reasonable point by (incorrectly) justifying it with axiomatic field theory.

It's self-indulgent over-engineering to try to prove division by zero in fields "works, no really!" from first principles for a programming language blog post; especially when mathematics is essentially united in saying, "no actually, it doesn't."

__________________

1. https://www.reddit.com/r/math/comments/3b5i6p/can_you_divide....

2. http://mathworld.wolfram.com/DivisionbyZero.html

3. https://en.m.wikipedia.org/wiki/Division_by_zero


If you mean this, https://news.ycombinator.com/item?id=17737661, then it contains a mistake, which I pointed out in a reply.

The references you linked to are irrelevant as they refer to informal mathematics. Of course you can't meaningfully divide by zero (in the sense that the value of division corresponds with our intuitive understanding of what division is). But when it comes to formalization you must assign some meaning to 1/0 or work out a complex system where it becomes ill-formed (i.e. a "syntax error"). The only question is whether that meaning -- that must exist in formal mathematics even though it should not in informal mathematics -- causes an actual contradiction or not, even though it doesn't "make (an informal) sense" either way.


I don't know what to tell you. You're continuing to argue but you seem to not be following my point. I've already long since told you we're in agreement that you need to account for syntactically possible but theoretically impossible algebras for computation. In so doing I have gone further and repeatedly stated that because division by zero is not actually possible in fields, the author should not have tried to use such a rigorous system to justify the practical, syntactical requirements of a programming language. It doesn't make sense to try and refute mathematical theory as the preamble of a programming language manifesto.

What exactly are you looking for me to say here?


> What exactly are you looking for me to say here?

Very simple. Here is a theory of a field in FOL:

    ∀ x . x + 0 = x
    ∀ x . ∃ -x . x + (-x) = 0
    ∀ x, y . x + y = y + x
    ∀ x, y, z . (x + y) + z = x + (y + z)
    ∀ x, y . x – y = x + (-y)
    ∀ x . 1x = x
    ∀ x . x ≠ 0 ⇒ ∃ x⁻¹ . xx⁻¹ = 1
    ∀ x, y . xy = yx
    ∀ x, y, z . (xy)z = x(yz)
    ∀ x, y . y ≠ 0 ⇒ x/y = xy⁻¹
    ∀ x, y, z . x(y + z) = xy + xz
Do you claim that this formalization is unacceptable (i.e. it does not precisely and fully define what a field is)? If so, why? If not, can you write a single theorem in this theory that would be falsified if I added the axiom, ∀ x . x/0 = 0 ?

If your answer to both questions is no, then your only point may be that the extended definition of the symbol / does not deserve the name "division" because it does not conform to our intuitive, informal notion. That's fine but is a long way from claiming that there is no acceptable formalization of fields or that division by zero introduces a mathematical contradiction or breaks equational laws. Note that you cannot say that the original theory defines a field while the extended one doesn't, because the latter's models are a subset of the former's.

This then leaves open the matter of whether such a philosophical, aesthetic consideration trumps pragmatic ones or vice-versa, to which I don't think anyone has a universal answer, and it is not the issue, anyway.


In the definition of a field ∀ x . x/0 = undefined, not 0 or any other value you might prefer.

A field is exactly defined by the field axioms: adding or removing any other axiom makes it no longer a field.


> A field is exactly defined by the field axioms: adding or removing any other axiom makes it no longer a field.

This is clearly not true. Adding axioms (as long as they don't introduce inconsistency) can only reduce the set of models. Anything that satisfies the larger theory also satisfies the smaller theory. This is exactly why a group is a monoid is a semigroup. This is why fields are rings. In fact, this is why the rationals or the reals are fields: they satisfy the field axioms, and more (for example, they are ordered).

> In the definition of a field ∀ x . x/0 = undefined, not 0 or any other value you might prefer.

What is "undefined"? First let's look at the logic itself: What is true ∨ undefined ? What is true ∧ undefined? Now at the theory: What is undefined + x ? What is 0 * undefined ? What is 0 = undefined ? etc.[0]

While it is possible to formally define this magic non-value (sometimes written as ⊥ in the logics that contain it) as is done in some programming languages (and it has been done), this would entail adding quite a few more axioms to FOL and to the theory of fields. When people say FOL, they mean a particular language[1], one that is considered by mathematicians to be sufficient to formalize all of mathematics and has very particular syntax and semantics. FOL + undefined is a very different language, with much more complicated semantics. If you think that when people refer to FOL they refer to FOL + undefined, I challenge you to find any mention of it in descriptions of standard FOL. Similarly, if you think that by common theories, say fields or sets, people really mean field + undefined or ZFC + undefined (what is {undefined}? What is undefined ∈ {undefined}? etc.), I challenge you to find any mention of the complex axioms required in treatments of these common theories.

But it's not just a matter of convention. The reason you won't find it in the standard logics/theories is that it's completely unnecessary. If you work it out, you'll find that the theory of fields I provided and the more complicated one involving undefined that you suggest is the common formalization are the very same theory, in the sense that they yield exactly the same theorems, except for those specifically involving undefined (i.e., your theory has more theorems than mine). You'll find that you cannot find a "bad" theorem that you can prove with the simple theory but cannot with the one involving undefined, and therefore it is unhelpful except for the purpose of satisfying a certain desire for intuition that requires significantly complicating the formal system.[2]
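
For what it's worth, this is exactly the convention adopted by Lean's Mathlib: division is total and x/0 = 0 is a library theorem, so no "undefined" is ever needed. A minimal check, assuming Lean 4 with current Mathlib lemma names:

    import Mathlib

    -- x / 0 = 0 is a theorem (div_zero), not a contradiction
    example : (1 : ℚ) / 0 = 0 := div_zero 1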

There's a paper by Sol Feferman[3] reviewing formal systems with explicit handling of "undefined." AFAIK, they are rarely if ever used. Such semantics are certainly not part of the trusty-old FOL, considered the default language of formal mathematics.

[0]: BTW, if you think that the meaning of every expression involving undefined is undefined, then you'll see that this doesn't work. For example, you have: x≠0 ⇒ x(1/x)=1. This is a valid formula, and so it's true even when x=0, but then you have undefined on the right-hand side, so you want at least false ⇒ undefined to be equal to true. But A ⇒ B = ¬A ∨ B, which means you want at least true ∨ undefined = true. Similarly, you'll want at least 0 = undefined to be false etc.

[1]: https://en.wikipedia.org/wiki/First-order_logic

[2]: E.g., while in your theory you will be able to prove the rather useless theorem 1/0 ≠ 0, in my theory you will not be able to prove 1/0 = x for any x, so it really poses no issue.

[3]: https://math.stanford.edu/~feferman/papers/definedness.pdf


>This is clearly not true. Adding axioms (as long as they don't introduce inconsistency) can only reduce the set of models. Anything that satisfies the larger theory also satisfies the smaller theory. This is exactly why a group is a monoid is a semigroup. This is why fields are rings. In fact, this is why the rationals or the reals are fields: they satisfy the field axioms, and more (for example, they are ordered).

Look at for example the set of real numbers. If you add the complex numbers to it, you can no longer speak of the real numbers. The real numbers form a subset of the complex numbers. The same thing happens if you add definitions to a field. You create something which maybe has a field as a subset, but you can no longer speak of a field.

What is "undefined"? First let's look at the logic itself: What is true ∨ undefined ? What is true ∧ undefined? Now at the theory: What is undefined + x ? What is 0 * undefined ? What is 0 = undefined ? etc.[0]

Undefined is just undefined, no value, not a magic one, or just a random one, or one you prefer. Just void, the empty set. To understand it, look for example at the graph of y = sqrt(x), where x and y are real numbers. You see no points at negative x. It's undefined in the domain x < 0. Yes, you can say that at x < 0, y = 3, and call that your definition of sqrt(), but then you no longer have the sqrt function mathematicians talk about. The same thing applies to fields and 1/x.

>What is "undefined"? First let's look at the logic itself: >What is true ∨ undefined ? What is true ∧ undefined? Now at the theory: What is undefined + x ? What is 0 * undefined ? What is 0 = undefined ? etc.[0]

Now you apply boolean logic to the result of calculations, numbers and non-numbers, not on statements. I can also ask what is 5 AND true? Is that true? The question doesn't make sense, so the answer is: depends on the programming language.


> Look at for example the set of real numbers. If you add the complex numbers to it, you can no longer speak of the real numbers.

What does this have to do with adding axioms? You don't get the complex numbers from the real numbers by adding axioms. For example, the following is provable for the reals, but not for complex numbers:

    ∀x,y . x < y ∨ x > y ∨ x = y

> You create something which has mayber has a field as a subset, but you can no longer speak of a field.

No. The class of all fields is all objects satisfying the field axioms. If you add axioms, you get a subset of those fields. Each and every one of them is a field.

> Undefined is just undefined

I'm not looking for handwaving. Formal mathematics is math that can be done mechanically. Write down the axioms for undefined. Again -- I'm not saying it hasn't been done, but it's certainly not part of standard first-order logic, and it is unnecessary.

> To understand it, look for example at the graph

I understand what undefined means in informal mathematics. Try to see what it means in formal mathematics. But the important point is, try to understand why it is unnecessary in formal mathematics in most cases.

> Now you apply boolean logic to the result of calculations, numbers and non-numbers, not on statements.

No. You can write 1/x < 5 ∨ x > 20.

> The question doesn't make sense, so the answer is: depends on the programming language.

There is no such thing as "doesn't make sense" in formal math, even though there is such a thing in informal math. That's the whole point. Either the expression is ill-formed, i.e. not in the language or a "syntax error", or it must make some sense.

This is why it's useful and easy to say something is undefined in informal math, but not as useful and not as easy to do that in formal math.


>What does this have to do with adding axioms? You don't get the complex numbers from the real numbers by adding axioms. For example, the following is provable for the reals, but not for complex numbers:

> ∀x,y . x < y ∨ x > y ∨ x = y

Since R is a subset of C, you can write C as R with additional axioms. See for example:

http://www.math.mcgill.ca/gantumur/math249w15/numbers.pdf

>I'm not looking for handwaving. Formal mathematics is math that can be done mechanically. Write down the axioms for undefined. Again -- I'm not saying it hasn't been done, but it's certainly not part of standard first-order logic, and it is unnecessary.

Undefined is just undefined, that is no handwaving, it is a primitive notion. See https://en.wikipedia.org/wiki/Primitive_notion

You are trying to define the undefined in a formal system. Undefined is just the absence of a definition. Not 0, 3, or 2pi.

> No. You can write 1/x < 5 ∨ x > 20.

Yes, those are valid mathematical statements. But 5 ∨ TRUE, or undefined ∨ TRUE? I doubt it.

> There is no such thing as "doesn't make sense" in formal math, even though there is such a thing in informal math. That's the whole point. Either the expression is ill-formed, i.e. not in the language or a "syntax error", or it must make some sense.

Well, in the language in which I am corresponding with you, there is such a thing as 'makes no sense'.

You are somehow trying to capture everything in your logical system only to try to prove that 1/0 = 0 is part of a field, which it isn't. I've made my point here, it was nice talking to you.


> Since R is a subset of C, you can write C as R with additional axioms.

That's not even remotely how it works. You may want to read up on logical theories and models[1].

> Well in language, in which I am corresponding with you, there is such a thing as 'makes no sense'.

Yes, and for the same reason we can have such a thing as "undefined" (that means more than merely 'not specified') in informal mathematics -- because both English and informal math are informal. But we are talking about formal languages[2], which do not have such a thing.

[1]: https://en.wikipedia.org/wiki/Theory_(mathematical_logic), https://en.wikipedia.org/wiki/Structure_(mathematical_logic)

[2]: https://en.wikipedia.org/wiki/Formal_system


It seems like he covered that in the article. There is no multiplicative inverse of zero.

But that isn't the same as defining a division operation.


He didn't cover that in the article, or at least not in a way that actually supports his point. Since there is no multiplicative inverse of 0, division by 0 is undefined behavior. Trying to remediate that by refuting the "proof" that it's undefined while still asserting that we can define it as something else is mathematically incoherent.

If you want to see why it doesn't work from a number of different angles, read through the /r/math thread[1] or the Wolfram MathWorld blurb [2].

The formal problem comes down to one of uniqueness of elements. If you define division by zero in a field, you must end up in a place where it follows that every element is equal to every element. Then you only have one element - 0. So really we should be stating that division by zero is undefined behavior in fields with at least one nonzero element.

But again - we can neatly sidestep all of this, because trying to defend your choice of undefined behavior in a programming language based on the abstractions of field axioms is silly. In the actual world we don't even deal with real numbers computationally, let alone the particular nuances of whether or not our programming language implements number systems composing a field or any other algebraic structure.

__________________________

1. https://www.reddit.com/r/math/comments/3b5i6p/can_you_divide...

2. http://mathworld.wolfram.com/DivisionbyZero.html


> Trying to remediate that by refuting the "proof" that it's undefined

He does not refute any proofs by citing that division by 0 is undefined. He refutes them by asserting that the multiplicative inverse doesn't exist. These are very different statements.

Neither the standard field nor his modified field use the zero inverse, 0⁻¹. The proofs he's criticizing do erroneously use 0⁻¹. That's what he's calling out, I believe.


> He refutes them by asserting that the multiplicative inverse doesn't exist.

Yes - but in algebraic fields division by x is equivalent to multiplication by 1/x. This is precisely why you cannot have a field that admits division by 0: because 0 has no multiplicative inverse.


Maybe all this talk about fields is a distraction? Integer arithmetic isn't a field anyway. Other than 1 and -1, no integer has a multiplicative inverse that's an integer.

Integer division just isn't the same operation as division on rationals or reals. The same laws don't apply.

(For floats, division by zero can return Inf, -Inf or NaN and there's no reason to define it differently.)

It looks like the theorem-proving languages that define 1/0 to be 0 tend to be using natural numbers as a fundamental type. Not only do they define division differently, they also don't have negative numbers, and so define 2-3 to be 0.

https://coq.inria.fr/library/Coq.Init.Nat.html
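
Lean behaves the same way out of the box (Lean 4 shown; both conventions, total division and truncated subtraction, hold for Nat):

    #eval 1 / 0   -- 0 : division on Nat is total, with n / 0 = 0
    #eval 2 - 3   -- 0 : subtraction on Nat truncates at zero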


Yes! Exactly, it's a distraction. There are reasonable, well-intentioned reasons to argue that division by 0 should be an acceptable operation in a computational setting. Computational settings need not admit all the rigor of theoretical math. I wish the author had not tried to involve field theory and programming like this because it detracts from the point :)


> First, mathematics does not give us true statements about the world, it gives us consequences that must follow if we accept various axioms or definitions.

Isn't a consequence in itself a true statement?


Allow me to clarify that point. The idea is that - conceptually speaking - mathematics cannot be used to tell you empirical axioms about the world in which we reside. All it can do is tell you what must be true given certain well-defined assumptions. It is a phenomenally powerful tool for proving things about the world from empirical axioms, but which empirical axioms we choose to rely on in engineering is mostly the domain of physics or the sciences.

The underlying point here is mostly a philosophical one, but it has some bearing on the matter at hand. In effect, the definition (or lack thereof) for division by zero in fields is of no practical consequence for the real world impact of implementing an operation which admits division by zero. I can define an algebraic structure in which division by zero is sane (it is not in fields, despite what the article states!), or I can define an algebraic structure in which division by zero is insane. Both can be coherent, consistent and genuinely useful.

But whether or not I can define something that works has nothing to do with what happens "when the rubber meets the road", so to speak. It was a mistake to open up an argument about programming division by 0 using the field axioms in the first place. There is no "one truth", there are only facts which must follow as consequences from assumptions. This is especially the case for programming, considering that computers are fundamentally incapable of working with real numbers in the first place.


They're true statements about the mathematical system. They might have no relevance to reality.


I'm not seeing how the tweet is mocking the Pony developers. It seems to mirror the tone and the content of the documentation in the screenshot. It looks like an absurdist spin on the ivory tower vs industry meme, and it doesn't make anyone the butt of the joke.


The screenshot in question cut off the explanation of why. I'm one of the Pony developers, and I took it to be mocking.


Thanks for replying! I took a look at the thread in question and you're totally right. In context, this is unambiguously snide. I've been fortunate that my learning and working environments have been absolutely free of this kind of mean-spirited nonsense, so it really took me by surprise.


I mean sure Pony, but as a programmer this result would surprise me quite a lot, which I tend to view as a bad thing.

https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...

However:

> One of the developers of Pony reached out to me. They're planning on writing a more in-depth explanation of why Pony chose 1/0 == 0, which I will link when available. As I understand it, it's because Pony forces you to handle all partial functions. Defining 1/0 is a "lesser evil" consequence of that.

I am looking forward to this write up.


Based on the tweet, I don't expect it to be very enlightening. Since programmers write programs to solve real problems, and real problems define 1/0=undefined... then this is only going to mask bugs in those programs.


Technically, some real problems define 1/0 as illegal.

But see renormalization. https://en.wikipedia.org/wiki/Renormalization?wprov=sfla1


> programmers write programs to solve real problems, and real problems define 1/0=undefined

Saying that is easy and common. Can you come up with a contrived example?


It's quite staggering how many people have posted on this thread without reading the article. Nearly all the objections people have raised are directly addressed in the post.

For those who still object to the argument, would you object to me defining the piecewise function f:R->R defined to be 1/x for x =/= 0 and 0 for when x=0?


> Nearly all the objections people have raised are directly addressed in the post.

Are they? I see nothing in the article about the negative consequences of surprising users with silent failure, for example. Or the fact that it makes little sense in real-world scenarios.

> would you object to me defining the piecewise function f:R->R defined to be 1/x for x =/= 0 and 0 for when x=0?

What do you mean by objecting to you defining a function?


The post is not about whether this is a good choice for its users; the post is about whether there are mathematical objections to it. The author even writes that they don't agree with the engineering decision to define it like that, but they can't object with an appeal to mathematics. It lists three valid choices for handling division by zero and Pony picked one of them. My personal favorite is to shrink the domain, but that's not supported in most languages, and those that support it are often not meant for number crunching.


> "see nothing in the article about the negative consequences of surprising users with silent failure, for example."

That's because the article is purely about the mathematical consistency of using 1 / 0 = 0. It explicitly and repeatedly mentions that it's not about real-world logistics.

> "What do you mean by objecting to you defining a function?"

The author is defining the division function with this special mapping, f, which includes 0 in the domain. Damn near everyone in this thread is confusing that definition with defining the multiplicative inverse, 0⁻¹.

It's an understandable confusion, but also honestly getting pretty frustrating. That's what he or she is getting at.


What proportion of the time will a programmer intend and expect 1/0 = 0?

What proportion of the time will a divide-by-zero operation be a symptom of a bug, where evaluating 1/0 as any valid number will make it harder to identify that bug?

There might be a mathematical justification for 1/0 = 0, but the computer science justification seems tenuous at best. The overwhelming majority of the time, I want divide-by-zero to throw an exception or evaluate to NaN. I'd far rather deal with the edge case of intentionally dividing by zero, rather than the not-at-all-uncommon case of unintentionally dividing by zero and getting unexpected behaviour as a result.


We’re talking here about integer math. Integer math on CPUs is usually fast at the expense of having coherent semantics. It isn’t even really “integer” math in any mathematical sense. It’s just “the fastest math that the CPU is capable of doing, which happens to look a lot like integer math most of the time.”

If you’re dividing floats, you will get NaN or Infinity as expected. Just like how, if you overflow a float, you’ll get Infinity. Floating-point has well-defined semantics, which CPUs are required to adhere to, and languages just expose.

If you divide “CPU integers” by (CPU integer) 0, the result is undefined. Just like how, if you overflow a “CPU integer”, the result is undefined. There is no standard semantics being adhered to. There is no equivalent of IEEE754 for CPU integers. There is only convention, and convention has no universal answer at these edge-cases. Thus, a language is free to do whatever it wants in response to either. There are no formal semantics to expose.

Usually, a language that calls itself “safe” will choose to make CPU integers behave a lot like real integers, by generating checks and throwing exceptions for the edge cases.

And usually, a language that calls itself “low-level” will just expose whatever semantics the CPU integers of its host platform already possess. In the case where there’s still an impedance mismatch (i.e. in the case of division by CPU-integer-0 where you don’t actually get any output into a result register), the language has to make something up. To “go with the flow” of how CPU integers work, probably it should be something fast.

Thus, 1/0=0 kinda sorta makes sense. It is an operation on the field of “CPU integers” that vaguely “fits in” with the existing ones. No mathematical relevance, but just fine from the perspective of someone e.g. seeking to create a higher-level abstract machine targeting the Pony compiler, where you’d still just insert a divisor check before doing the division op if you wanted the result to make any sense.
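
A minimal sketch of such a higher-level divisor check (Python for brevity; checked_div is a hypothetical name, not anything Pony itself provides):

    def checked_div(a, b):
        # The abstraction layer decides what "sensible" means before the fast
        # underlying op runs; here we raise, but a sentinel would work too.
        if b == 0:
            raise ZeroDivisionError("divisor is zero")
        return a // b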


Thanks for this explanation.


I very strongly object to this convention, as both a programmer and a mathematician.

A basic theorem is that for any field we must have that a/b = c/d if, and only if, ad = bc. See theorem 2 in this classic text here for a proof[1]

So if 1/0 = 0, then we also have 1/0 = 0/1, which implies 1 x 1 = 0 x 0 which is a contradiction. But then we also have 1/0 = 0/x for any x, hence we also have shown x=0 for any x, which is an infinite number of contradictions. Is this a problem which is addressed in the post that I missed?

By the way, I have spent more time than it's polite to discuss in public studying the topic of dividing by zero, and also what 0/0 means. If you are also interested in this topic, the best youtube video is by Mathologer and I highly recommend it to everyone. See here [2]

[1] https://books.google.com/books?id=3ApEDwAAQBAJ&lpg=PP1&pg=PA...

[2] https://www.youtube.com/watch?v=oc0M1o8tuPo


So... if I do:

   x = a/b + c/d + e/(f*g/h)
I need to check:

   if(b == 0 || d == 0 || f == 0 || g == 0 || h == 0)
rather then:

   if(isFinite(x)) 
Or whatever function/equality check is appropriate, or a try/catch if it throws an exception.

I have to say that using a single check on the result seems significantly less prone to bugs than having to check all the values that could produce an invalid result.


Without NaN you don't get to see that the computation "went wrong". But that's not necessarily a problem, and only if it is must you check whether those inputs are zero.

Still, I agree that NaN values are preferable for this reason.


Of course, there will be cases where you need to check some of the initial values regardless, but often you just care if the result is valid. From a defensive programming stand-point, if you don't expect any of those values to be zero, you just check for NaN at the end rather than carefully reviewing your calculations for which values you have to check.

I would REALLY hate having to write a geometry library without NaN values. Both because of error checking, and the fact that NAN/INF is often the 'correct' answer when it's the result of a calculation.


The problem being that constructing isFinite(..) for machine integer types is basically impossible.


This is very bad for numerical code. When you divide by almost zero you get a big number. If the divisor becomes zero, the result should be an even bigger number. Having 1/0 = +inf is a perfect solution. Setting it to 0 breaks continuity and isn't useful at all.

One of those proposals which can only be made when you don't take time to understand and appreciate the use cases the current solution was created for.


> Having 1/0 = +inf is a perfect solution.

It makes sense if we have signed zeros; then +0 represents an infinitesimal positive value. IEEE 754 has signed zeros, though the signs may sometimes not match programmers' expectations, e.g. 1 - 1e-50 - 1 == +0.
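
That claim is easy to check (a quick Python session; math.copysign exposes the sign bit that == 0.0 hides):

    >>> import math
    >>> 1 - 1e-50 - 1                      # mathematically -1e-50, but...
    0.0
    >>> math.copysign(1, 1 - 1e-50 - 1)    # ...the zero it rounds to is +0
    1.0
    >>> math.copysign(1, -0.0)             # whereas -0.0 carries its sign
    -1.0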

Maybe 1/+0 == +inf is reasonable, but would you agree that there's no reasonable answer for 0/0? So IMO, reasonable languages must define division as a partial function, or a function that can throw an error (or return NaN, Nothing, etc).

And if division is a partial function anyway, it seems best to not allow division by zero at all, since even 1/+0 == +inf can have unexpected consequences.


> Lawrence Paulson:

> > These things are conventions, exactly the same as announcing that x^-n = 1/x^n and that x^0 = 0.

Say what? x^0 is 1, and not by convention, other than in the 0^0 = 1 case.


I was thrown by that too, maybe it was a typo?


FWIW Pypy used to share this behaviour. It was clearly a bug, as it doesn't follow CPython convention.

It had been around since 2013 and was only found during a unit-test coverage improvement effort in Feb. 2018.

Given that pypy is mostly used for hard-core math and scientific computations, how the hell wasn't this bug found earlier by a user?

My bet is that ZeroDivisionErrors are very rare. Do you even remember a ZeroDivisionError in production code? I don't.

1/0 is my dirty little trick to add a breakpoint when I'm too lazy to launch a debugger. Why does this work? Here again, because nobody ever catches ZeroDivisionErrors, because they don't happen.

So, what a fuss for such a tiny convention. Granted, it breaks the math correctness, as do ±Infinity. CPU integer math is broken anyway. In what world does 2^31 - 1 + 1 == -2^31?

I took a fairly large, widely used and old repository, Django, and searched for ZeroDivisionError:

https://github.com/django/django/search?q=ZeroDivisionError&...

guess what:

- 4 occurrences in the tests

- 3 occurrences in the issues

- last, but not least 1 occurrence in the code:

   except ZeroDivisionError:
       result = '0'


Oh sure, there are infinite numbers of matrices without inverses and no one cares.

But you try to slide one little real number without an inverse by and everyone freaks out.


> We’ve now established that if we choose some constant C, then defining division such that x/0 = C does not lead to any inconsistencies.

There is no proof in this post that 1/0 = 0 maintains consistency. Rather, it contains refutations of one or two arguments that claim inconsistency, along with appeals to authority.


You're right. It really would've been nice for the author to offer a direct proof.

However, while I'm a bit rusty, I think it basically has to be true. For there to be an inconsistency, 1/0 = 0 must either

a) imply the negation of some previous theorem of arithmetic or

b) imply that 1/0 = x, for x != 0.

I think a) can only be true by way of b), since no existing theorem of arithmetic involves the expression y/0 for any y. I confess, I don't know the right way to prove that b cannot be a consequence, but it doesn't look like one.

Edit: I think it's just trivial to take a model of the existing axioms, then add n/0 = 0 to the division relation.


You have to add extra "except when"s to every theorem involving division. E.g.:

(a+b)/c = a/c + b/c

(in particular, for C != 0, as the author claimed it would work not just for 0 but for any real number)

Now to say this is true you have to say "unless c = 0", whereas before that was automatic from the definition of division.


The same theorem

forall c != 0, (a+b)/c = a/c + b/c

is true before and after.

It may be helpful to observe that while using the same symbol '/' in both cases is confusing, mathematical theorems are not about symbols, but about particular mathematical objects. The old theorem is about the division relation defined for real numbers != 0; the new one is a relation defined for all real numbers that just happens to coincide with it elsewhere.


That is exactly my point. Now you have to disambiguate a poorly-chosen symbol.

The theorem doesn't need the stipulation that c != 0 since you have already excluded c from the domain.


The explanation in the article basically boils down to the following:

let a/b mean, in my programming language's syntax: a divided by b when b is not zero, or 0 otherwise.

Yeah, it's not "mathematically invalid," but it punches a hole in the fidelity between numerical methods and the math that they approximate.


I knew that but my friend didn't. So thank you for explaining that for my friend.


This is "Worse is Better[1]" engineering at its most infamous. In order to make the interface consistent and simple you take the exceptional case and just make it work the same as all the other cases. It goes with the "Worse is better" credo that "it is slightly better to be simple than to be correct". Also "it is better to drop those parts of the design that deal with less common circumstances than to introduce either complexity or inconsistency in the implementation". So throwing an exception on division by zero or returning some complex thing like NaN is too much complexity, so we just return 0.

[1] https://en.wikipedia.org/wiki/Worse_is_better


Let f(x) = 1/x, then (from a mathematician’s perspective) the limit of f(x) as x->0 doesn’t converge and is represented as being infinite, definitely not zero.

As the denominator gets smaller, the result is bigger and bigger and bigger. The closer the denominator gets to zero the larger the result. Why would we want to pick an answer that is small when instead it should be larger than any number?

One might argue that 1/0 should be MAXINT or IEEE 754 plus infinity due to the limitations of our hardware, but I’ve never wanted 1/0 to result in zero in any program I’ve ever written.

I would prefer that it be treated as an error or possibly some reserved value that behaves like INF while still being an integer in the programming language.


Well, you have an ambiguity as to whether it should be +INF or -INF, depending on which side you take the limit from. Arguably, 0 is more reasonable because it preserves symmetry. Of course, it would be far more reasonable to define this as an exception, because you shouldn't be able to do division by 0.


First, the definition given of a field is wrong https://en.wikipedia.org/wiki/Field_(mathematics) . But why not? The author might not know, but 1/0 is by definition the inverse of 0, and that's why 1 * (1/0) = 1. What he does is extend the definition of the division FUNCTION. And you can always do that. But by doing that you lose most of the algebraic and arithmetical properties of numbers. But worse! There exists 1/0 = infinity, but also infinity * 0 = 0 (in probability). So please stop creating fake controversies about stuff that really doesn't need ANY DEBATES.


> But is Pony doing something unsound? Absolutely not. It is totally fine to define 1/0 = 0. Nothing breaks

a=x/N

b=y/N

If a==b, then x==y

No longer true, with this change. Essentially a variety of mathematical properties of numbers in the Real space don't hold when you allow division by 0.


“Nothing breaks” in that it isn’t unsound, in that you can’t use it to prove a falsehood. Not that you can’t use the nice little shortcuts you’re used to.

In practical terms, that _does_ have an impact, because people might use those shortcuts without realising the system they’re operating under doesn’t allow it.

But you picked the wrong quote to make that point under.


To elaborate on the 'nice little shortcut' you mentioned, here it is:

If a==b and N≠0, then x==y

EDIT: Formatting


No, that is not necessary.

"Given an x, y, and N such that x/N == y/N, it is true that x == y".

There is no need to say N != 0 under the ordinary definition of division.


This is not true if you say division by 0 is undefined or +inf either. It really is totally fine.


No, you are assuming that 1/0= "undefined" where "undefined" is a magical value. It's not. It literally means 'That operation has no definition, and there is no reasonable result to return'.

Many languages handle this in a practical way by creating a special value "undefined", "NaN", or others that have special properties. But let's be clear that "0" is a normal number in the real number space, and so expectations about properties of real numbers are not going to hold up.


When there is no reasonable result to return does it really matter what result is returned? 1 / 0 == 0 is about the same as 1 / 0 == 3. To avoid this just don't do 1 / 0 in your program.


> 1 / 0 == 0 is about the same as 1 / 0 == 3

Sure, and both are bad.

> To avoid this just don't do 1 / 0 in your program.

If you're going to avoid doing it, better to have it throw an exception if you do it by accident.


well, as long as you aren't doing calculus


I think this is the crucial bit. Division by zero in fields means nothing, but if you want to tack on calculus (part of pretending that doubles are reals) then zero is a perfectly good limit for division and it turns out that defining it as anything at all breaks your system.

I didn't like the blogpost, because it's smug, and doesn't even mention this obvious objection.


I am not really a programmer or mathematician, so take this for what it is, but to me, the statement makes linguistic sense, if you read it out like a first grader.

If I divide one thing into zero groups, how many items do I have in each group? Zero. Edited per comment below.

If I were really pedantic, I'd say that in the above, it's almost like the input "1" is not being divided, but grouped.

The funny thing then, is that if I am grouping instead of "dividing", then if I enter 1 modulo 0, I'd expect to get 1. I.e., modulo zero should be an identity function: 1%0 = 1, c%0 = c.
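
Amusingly, some theorem provers adopt exactly this convention for natural-number modulo: Lean, for instance, has n % 0 = n. A sketch of the "grouping" version (Python; group_mod is just an illustrative name):

    def group_mod(c, m):
        # Treat m == 0 as "no grouping happened", so modulo zero is identity.
        return c if m == 0 else c % m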


I think you mean if I divide one thing into zero groups, how many items do I have in each group


Edited, thanks!


I must admit I prefer 1/0 not to be 0 in the "Common Algebra". But that might be because I come more from a Natural Science/Engineering background where continuity is something that is expected virtually everywhere. Not even Theoretical Physics text books mention that continuity of functions is required, as it is so ubiquitous. 1/0.0000001 = 10000000, 1/0.00000001 = 100000000, ..., 1/0 = 0 would break that. In fact we usually have even harder requirements: we want everything at least twice differentiable...

However, nobody stops people from defining their own Algebras or Fields in languages that support it. In C++ this should be a no-brainer, if the article's sentiment that consistency poses no problem is true. Being a "man of industry" myself, 1/0 throwing or at least becoming infinity doesn't seem a problem to me. Similar to not using gotos or so.

FWIW, JavaScript, an underrated language when it comes to weird corner-cases, does the right thing. It doesn't throw an error; instead the result becomes Infinity. When I multiply it with a finite number it stays Infinity. When I multiply it with 0 it becomes NaN, which is totally sound because in a real-world application one could arrive at these numbers because of limitations of the storage. 0.00000......1 becomes 0, thus the real result of 1/0 x 0 in that case can indeed be literally anything. The beautiful thing about JavaScript is how it continues to handle this: when I say 1 < NaN or 1 > NaN it stays false. So the algorithms are likely to fail much more gracefully than in other languages.


JavaScript’s behaviour is just how floats work. They are the same in all languages. There is storage reserved as part of their representation to encode infinity and NaN.

The issue here is hardware integers, which have no signalling bits beyond possibly sign.


Sure, JavaScript uses floats for everything. But the article seems to be actually mostly about real numbers: "The real numbers, along with our conventional notion of addition and multiplication, form a field." (Ignoring the fact that there are finite fields that are properly closed under all operations)


> These things are conventions, exactly the same as announcing that x^-n = 1/x^n and that x^0 = 0.

The convention that x^0 = 1 (including 0^0 = 1) is genuinely useful because it removes a case distinction from lots of combinatorial formulas. Hence this: http://tinyurl.com/zeropowerzero

x^0 = 0 seems to me to be - I won't say "wrong" because you're free to define things the way you like - in general less useful than x^0 = 1.

For similar reasons, (0 * log 0) = 0 makes sense when computing entropies and stuff in certain machine learning applications. This one can also be justified as the limit of x * log(x) as x -> 0 is 0, but I've seen at least one person just trying to define "log 0 = 0" which I consider less elegant; in the kind of application where this convention is useful, you'll rarely if ever see a log 0 that's not multiplied by a plain 0.

Where it gets really weird is when you do KL divergence and can end up with 0 * log (0/0), but again you can save a case distinction by just declaring this to be 0. The elegant way to do this is to define d(p, q) = p * log(p/q) when p, q != 0 and 0 otherwise. In this particular application, just going with 1/0 = 0 seems to work fine in practice as in a term q * log(q/0) you can rewrite the whole thing by flipping the sign to get 0 * log(0/q) in that term, which we have already declared to be 0 (whether or not q is 0 itself).
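
A sketch of that convention in code (Python; the function names mirror the definition above and are mine):

    import math

    def d(p, q):
        # p * log(p/q) when both are nonzero, 0 otherwise, so zero-mass
        # points are absorbed by the arithmetic rather than the formula.
        if p == 0 or q == 0:
            return 0.0
        return p * math.log(p / q)

    def kl(ps, qs):
        # KL divergence of two distributions given as probability lists.
        return sum(d(p, q) for p, q in zip(ps, qs))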

In this case it's not a bug in the code, it's being able to handle probability distributions with mass 0 on some points at the level of arithmetic rather than as a special case in the formula.


You lose simple properties of division, such as (a + b)/c = a/c + b/c.

If you set 1/0 = 1 (as the author claimed causes no inconsistency), then (2 + 1)/0 = 2/0 + 1/0 is false

If you come to rely on division by zero behaving a certain way (as happens to all features/bugs of any language), then suddenly silly decisions about how to implement a formula can cause wildly different behavior. Good luck refactoring!


(a + b)/c = a/c + b/c for c ≠ 0. Otherwise it has no meaning.

Therefore (2 + 1)/0 = 2/0 + 1/0 is not even false, it just means nothing.

BTW, you didn't mention Infinity explicitly, but it would lead to the same inconsistency, if you think about it:

(2 - 1)/0 = 2/0 - 1/0

:)


If you define a/0 to be a value, then you can ask whether (2 + 1)/0 = 2/0 + 1/0. This is not meaningless since both sides are defined. If you define a/0 = 2, then the equation is false.

Using "infinity" (which is not how it's actually done in math) requires you to rule out the possibility of operations like infinity - infinity.


I had considered this as a possibility following much the same train of thought expressed by abakker but because it diverges from most common practice (and from mathematics) and because I was unsure whether anyone would actually want it, I had not really considered using it.

While reading the article and the comments I realized that it will work perfectly in my NISC processor [1]. I just need to update my specification.

To extend the simple first specification to include multiply is very simple. I just need to have a MUL register which, when written, multiplies the written value by the value in the accumulator with the low part of the result being left in the ACC and the high part in the MUL register.

Extending to include integer divide is a little more complicated. I will have to add a DIV register and a DIVEX register. Writing to DIV will divide the value in the ACC by the value written leaving the result in the ACC and the remainder in the DIV register. DIVEX will contain the address of the routine to call when division by zero is attempted which means that DIVEX must be loaded before division is used. If I specify that DIVEX is initialized to zero by the hardware and that DIVEX==0 means that divide by zero leaves zero in the ACC and the remainder (former contents of ACC) is left in the DIV register then 1/0 = 0 will be the default behavior with the ability to change that behavior simply by writing the address of the divide exception handler to the DIVEX register.
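
A rough Python model of those register semantics, to make the behavior concrete (names and structure are my paraphrase of the description above, not the actual NISC specification):

    class DivUnit:
        def __init__(self):
            self.acc = 0     # accumulator
            self.div = 0     # receives the remainder after a divide
            self.divex = 0   # handler address; 0 selects the default behavior

        def write_div(self, value):
            if value == 0:
                if self.divex == 0:
                    # Default: quotient 0 in ACC, old ACC kept as "remainder"
                    self.acc, self.div = 0, self.acc
                else:
                    self.jump(self.divex)  # run the divide-exception routine
            else:
                self.acc, self.div = divmod(self.acc, value)

        def jump(self, address):
            # Control transfer elided; depends on the rest of the machine model.
            raise NotImplementedError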

I leave it as an exercise for the reader to determine how to deal with this in high level languages. I can fully support it in machine code and assembly language.

[1] https://github.com/BillBohan/NISC


I’m not sure if a general system should deviate from established definitions but in specific programs I certainly bend the rules.

For instance, I made a game, and I might accidentally end up with a value that won’t modulus correctly due to a zero denominator. In that app, mathematical accuracy for this single outlier does me no good at all because it manifests as “might crash randomly” when I want “doesn’t crash, period”. I therefore wrote a “safe modulus” routine that essentially guards against zero and makes a command decision to return a value. The added robustness against crashes is preferable, since I may not know if I have found every stupid case of the math accidentally working out to exactly zero.
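
Something like this, presumably (a sketch; safe_mod and the fallback value are my guesses at the shape of that routine):

    def safe_mod(a, b, fallback=0):
        # Command decision: never crash on a zero denominator mid-game;
        # return a fixed fallback value instead.
        return fallback if b == 0 else a % b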

In fact, more generally, exact zeroes mask lots of bugs. They are often default initial values, meaning you might not notice something working accidentally. Sometimes I use a really tiny float as my “meant to be zero” to distinguish, e.g. 0.0001 means “item was supposed to appear at coordinate 0”, that way anything that accidentally ended up at 0 is easier to detect.


The standard definition of r = p/q is the number that solves the equation rq = p. This version of "division" does not satisfy that definition, and is therefore not actual division. Call it division* or something.

Edit: it seems that Pony is only doing this for integer "division", which isn't regular division anyway, and the standard rules don't apply.


I remember that 1/n is saying 1 with respect to n increments. 1/4 is 1 out of 4 buckets (of 1 integer). If you have 1/no buckets, it is an amount, but it is impossible to know how much. It's measuring a caterpillar without a unit of measure. "He's 7" "Ok, Im sure he is, but 7 what?" 1/0 may approach 0, but it is not 0.


But what does it give you? I may not like PHP's definition of "5uper" + 3 as 8, but at least I can see an advantage to it, a use for it. What would you do with that zero quotient, any examples? And I mean examples where you haven't checked the value of the divisor (otherwise what's the point). I appreciate spooneybarger's input as a core developer of Pony, because he lets us know this was a difficult decision, and frames the definition squarely in the context of the language.

On the other hand, the author of the article seems to push for a general, mathematical definition of 1/0 as 0. First of all, good luck with that, as there is no standards-issuing body in mathematics. And whether the author likes it or manages to overturn a very long tradition, division is defined when we grow up as multiplying by the multiplicative inverse, with some occasional notes like "Division by zero is undefined for the real numbers and most other contexts" [1] or "In general, division by zero is not defined" [2], pointing to settings where clearly you won't find consensus.

I refer to when we grow up because we should not forget the intuitive definition Wikipedia gives first and, I guess understandably, MathWorld gives last: "separating an object into two or more parts". As we know, this restriction of two parts can (more and then less) intuitively be loosened to one, negative numbers, rational numbers, real numbers, etc., but only arbitrarily for zero. But then again, even if we unanimously agreed on one value, what does it give you?

[1] https://en.wikipedia.org/wiki/Division_(mathematics)

[2] http://mathworld.wolfram.com/Division.html


Well, it's a design choice that you can make for rational numbers and the restricted rationals with error values that we typically use in computer arithmetic. There's nothing wrong with it and as the article points out, even most proof assistants use this definition for rational division.

For real numbers though, division by zero will always diverge or be undefined, no matter how you represent them. All computations on real numbers are continuous and there is no continuous extension of division which includes 0 in the domain of the denominator.

One can get around this by switching to projective reals (essentially reals extended with a new element that's both +infinity and -infinity). This is no longer totally ordered and has some other problems, but it's actually a great choice for a lot of numerical computations. Yet, I've never seen something like this implemented in hardware, aside from John Gustafson's unum concept.


Once a mathematician explained to me in mind-numbing detail why dividing by zero is impossible, so I understand why this is gross better than I ever wanted to.

But having said that, I do not remember when I last saw a Divide-by-Zero-error in the wild. In practical terms, this is the last kind of bug I worry about.


1/0 = 1(±∞)

https://twitter.com/westurner/status/960508624849244160

> How many times does zero go into any number? Infinity. [...]

> How many times does zero go into zero? infinity^2?


Zero goes into zero x times, for any real x. Infinity isn't real, therefore neither is infinity^2 so no.


Extrapolate.

What value does 1/x approach?

What about 2/x?

And then, what about ∞/x? What value would we expect that to approach? ∞(±∞)


It doesn't approach anything, it's unbounded.


I've used this workaround in genetic algorithms, as has John Koza, the pioneer of the field.

The idea behind this is you're never going to get the computer to program itself with exceptions, so better not to let it produce code it will not be able to execute fully without human intervention.


In some languages, you could define a new division operator that returns an Optional/Option/Maybe result, and then use a nil-coalescing operator to choose what you want in case of division-by-zero. In Swift:

    infix operator /? : MultiplicationPrecedence
    extension FloatingPoint {
        static func /? (lhs: Self, rhs: Self) -> Self? {
            return rhs == 0 ? nil : lhs / rhs
        }
    }
    extension BinaryInteger {
        static func /? (lhs: Self, rhs: Self) -> Self? {
            return rhs == 0 ? nil : lhs / rhs
        }
    }

    10 /? 2 ?? 0  // -> 5
    10 /? 0 ?? 0  // -> 0
    10 /? 0 ?? 10 // -> 10


You can't define your own in Pony but,

/?, *?, +? and -? are coming to Pony. We call them "safe math operators"; they will error on integer overflow, underflow, and division by zero.


I consider it infinity because the limit as the denominator goes to 0 is infinity.

I see this with investments that have infinity ROI. If you get into an investment using only other people's money (OPM) and you make a profit then you have made x / 0. Saying I made an infinity return makes more logical sense than I made 0 return.

There's a saying about the difference between a mathematician and an engineer. If there is a $100 bill on a table and each step you take must be half the remaining distance or less, the mathematician will never get there, but the engineer will get there no problem.

It's a non-sensical debate IMO but from the investing scenario that I described above, for all practical purposes, it makes more sense to consider it infinity.


> I consider it infinity because the limit as the denominator goes to 0 is infinity.

Note that this only works if the denominator approaches 0 from the positive side; from the negative side the limit is -∞. This is the idea behind the extended real numbers.


FWIW the EEL2 language (part of https://www.cockos.com/wdl/ and what powers REAPER's JSFX audio, video processors, etc) also makes this design decision...


To be honest, this is the kind of post I'd write after a few beers when I'm not giving a shit and I think people have the right idea about something a lot of people would find controversial. Great post! You've made a real contribution here.


If something is undefined in math, it doesn't mean define it with whatever value you want: it means undefined because you can't define it.

The function sqrt(x) is undefined for negative x in the real number system. Is it therefore zero? No. Just because a value is not in a domain doesn't mean just pick one.

Most math textbooks I know and Wikipedia define division as the inverse of multiplication. You are saying that there is a distinction, that the inverse of multiplication is not the same as division: you can write 1/0, but that is not the same as 1 * 0⁻¹.

This is unconventional to say the least, and the only argument you give to make this distinction is your desire to define 1/0 as 0.


I don’t really understand the main argument given in the article; since zero has no multiplicative inverse, you can’t prove anything involving a division by zero. (So why talk about field axioms?)

However, to extend an operation usefully to a larger set of numbers, we can isolate some properties that we want to keep, for example:

1. Division is linear in the first argument: (ab + c)/x = a(b/x) + c/x, for all a,b,c,x.

2. If x has a multiplicative inverse, then x * (1/x) = 1.

3. Taking reciprocals twice gets back to the original number: 1/(1/x) = x for all x.

The choice 1/0 = 0 is consistent with all the above, and so probably still “useful”. What properties do you want out of a division function?
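
As a sanity check, both of these survive in Lean's Mathlib, where division is total with x/0 = 0 (a minimal check, assuming Lean 4 with current Mathlib lemma names):

    import Mathlib

    example (x : ℚ) : 1 / (1 / x) = x := one_div_one_div x  -- property 3, even at x = 0
    example : (0 : ℚ) * (1 / 0) = 0 := by simp               -- 1/0 can't conjure up a 1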


Any division either works and returns a result, or doesn't work and doesn't return a result. This is similar to any other function that maybe returns a result (returns an Option<T>/Maybe<T> or even Result<T, E>).

Perhaps it's not a good design to have division as an operator in a programming language? Just like

Maybe<int> i = ParseInt("foo")

one should also do this for division:

Maybe<int> q = Div(n, d)

Is there any language that does this (returns an option for default division?)

Obviously, this is also the case for ALL other operations on finite number types because of overflow. But overflow is MUCH rarer than division by zero issues, at least in my experience.
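
For reference, here is roughly what rolling it yourself looks like (Python sketch; div is an illustrative name, not a stdlib function):

    from typing import Optional

    def div(n: int, d: int) -> Optional[int]:
        # Total function: None instead of an exception on a zero divisor.
        return None if d == 0 else n // d

    q = div(10, 0)
    result = q if q is not None else 0   # the caller picks the fallback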


That's interesting. In my career, I've rarely encountered division by zero. I've encountered over- and underflow considerably more. Different experiences, different points of view.


As near as I can tell, he's just redefining what division means when the divisor is zero. That is, creating a special case. And yes, I get that it's a useful hack in programming.

So anyway, IANAM. What I learned was that 1/0 is infinity. And in software, that generally means a divide-by-zero error. But I also learned the utility of making approximations and exploring behavior at limits. And 1/x obviously increases exponentially as x approaches zero.

So how, then, can 1/x all of a sudden be zero when x is zero? It makes no sense to me. Call it undefined if you like. But it's obviously not zero.


1/x as x -> 0 is a famously tricky expression because your statement that it "obviously increases exponentially as x approaches zero" is only true half the time, because it's only true if you approach zero from the positive side, with x being set to smaller and smaller positive values. If, on the other hand, you set x to negative values of smaller and smaller size, 1/x actually decreases toward -Infinity. This is one of the several reasons that 1/x is typically considered undefined for the Reals.


Damn, so last night I was thinking about how 1/x approaches ∞ for positive x, and -∞ for negative x. And it struck me that perhaps beyond both ∞ and -∞ is 0, so 1/x might actually equal 0 when x is 0. Rather like closed "linear" paths in closed universes. But again, IANAM ;)


OK, sure. Either infinity or -infinity. But then just look at the absolute value of 1/x. And in any case, neither infinity nor -infinity look anything like zero.


I know nothing about pony, but if it does have an optional type, this would be a much better choice. You lose information by returning 0. Returning "none" would keep the information that you tried to divide by 0. Of course, this means all division would return an optional instead of a value. This is the cost of safe, robust programming.

<rant> I see a lot of software engineers trying to get rid of the cost of writing robust programs. It's not possible. Stop trying. You are either safe, or you are not. You either handle all cases, or you don't. </rant>


This is a case of poor abstraction. Many commenters in this thread explain why, in their application, 0 is a good fallback for division by zero.

But, falling back to 0 is not the correct error handling result for all situations. In other situations, an error is the correct result. The language designers should include proper division, and this style of division should be labeled a "safe" division; or, a division that throws an error should be labeled an "unsafe" division.

I have serious doubts about a language if it chooses one-size-fits-all error handling like this.


This item reminded me of how I've owned a couple different German cars with a fuel consumption meter that showed minimum consumption (or maximum mpg) when the car was idling. It seems like they must indeed have defined (fuel used ÷ distance traveled) as zero when the distance is zero. Which is annoying and grossly wrong in my opinion. For some reason, Japanese cars seem to do it correctly.

It may be true that 1/0=0 can be part of a reasonably consistent system, but that doesn't mean it's a good one. Can we define 1/0 as ∞?


I don’t think 1/0 = infinity is a good choice either, since -1/0 = 1/(-1 * 0) = 1/0, so we would have infinity being equal to negative infinity, which is assumedly not a practical choice. 1/0 = 0 is actually more symmetric in this regard.


While I fully agree, I'd like to point out that reducing specifically to 0/0 = 0 is more generally useful (i.e. maintains intuition) than x/0 = 0, which breaks intuition.

Given a/b = c, your intuition of finding c is to get a number such that c×b = a.

c×0 = 0 has an infinite number of solutions, 0 being a particularly practical one.

c×0 = 1 (and more generally a≠0) has no solution, especially in the reduced explanation of multiplication as taking integer multiples of a value.

(Also note that having 0/0 = 0 maintains symmetry and continuity of the function y = 0/x.)


But this is not for man of industry!


It is stunningly practical.


There is a pretty good episode on Numberphile that explains why dividing by zero is undefined.

Basically, the reason why 1/0 = 0 is wrong is the fact that you can't reproduce 1 by taking the answer 0 and multiplying it by 0 to get back the number being divided, which goes on to break a bunch of other fundamental rules of mathematics.

That is to say, if you took 6 / 2 = 3 you can reverse the division by doing 3 * 2 = 6.

However, if you have 6 / 0 = 0 and you were to try and reverse it, then you would have 0 * 0 != 6. Then you have the bizarre circumstance where everything divided by zero logically equals the same thing, where (2 + 2) / 0 all of a sudden equals (Einstein's laws of gravity) / 0. And there is no logical way to really explain what that is supposed to mean.

Also, if you try to use limits to find a value for f(x) = 1/x as x approaches 0 from both sides, you end up with an answer that is even more confusing when you try to graph it.

That is, f(x) goes to +inf and -inf. On a graph this is represented as two curves: approaching 0 from the positive side of the x axis, the curve goes up along the y axis to infinity without intersecting where x=0, and approaching 0 from the negative side, we end up with a curve that moves down the -y axis without intersecting at 0.

So frankly, n / 0 = 0 just doesn't make sense, because approaching zero from both sides shows the answers moving further apart rather than converging together at the origin, which is what you would expect if n / 0 = 0 were actually true.


I'm writing a software language that has 1/0 = 0 as well. In my case, I am experimenting with several ideas, two of which are no runtime errors and no exceptions, so everything is either a logic error (aided by multiple return values) or a compiler error. It works fine in normal scenarios, but dividing by zero can't be caught at compile time, and operators can only return a single value, so I had to return something.


It is the average of 1/0+ and 1/0-

So at least that's something


> So at least that's something

It's also nothing. :-)


Sometimes I have a feeling that I should be avoiding division in all cases to begin with, but then, there are cases where it seems all too indispensable. Especially when a value is a measure with an ignored error, division close to zero is almost meaningless. Is it a realistic goal to try and avoid division in the first place? I don't remember my numerical computing course well enough.


I have lower level question about how Pony implements this behavior:

When a CPU is asked to divide by zero it generates an exception. This exception would cause the OS to jump to the interrupt handler in the IDT for divide by zero exceptions. This handler generally results in the OS terminating the process. Does the Pony runtime register its own signal handler with the OS for a divide by zero exception?


An alternative to multiplicative inverse is to think of division x/y as allocating an amount x to y recipients; this is a very common operation in finance, e.g. allocating x shares to y accounts, or if you prefer, splitting a bill x among y diners. In this interpretation, you can't allocate anything to 0 recipients, so it's reasonable and convenient to set 1/0 = 0.


Ruby defines division by zero for Floats

  1.0 / 0
  => Float::INFINITY
and

  0 * Float::INFINITY
  => Float::NAN
I think infinity is a more intuitive result, but most of the time, if I get this as a user I would rather see 0 (bought books per month: Infinity). As a programmer I like to get an error, because I forgot to handle an edge case...



AFAIK Javascript does this as well.


1/0=0 is basically crack for the HN crowd, I knew it as soon as I saw it, just 5 little characters would drive people into the mouth of madness.

It's like WWE Raw Smackdown, Hulk Hogan vs. The Rock and everyone has to take a side.

And it's literally Friday night.

Sorry, we're not supposed to comment like this, but y'all (I suppose 'we') are a really funny bunch sometimes.


The article uses two example theorems to justify this: a * (b/c) = b * (a/c), which can be extended to cover the c=0 case, and a/a = 1, which can be extended to cover the a=0 case. I can see how this can be useful in some situations, but it's not natural in the same way as e.g. analytical continuations of complex functions are.


In case you're wondering, Elm has a strict "no runtime errors" rule, but avoids this by not having integers.

Another interesting example is array access. What should a[len(a)+1] be if your language doesn't have runtime errors?

Elm returns a Maybe type. However, this would be very awkward if you're actually doing a lot of calculations using arrays.
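
Sketched in Python (hypothetical helper name, with None standing in for Maybe's Nothing):

  # Safe array access: out-of-range indices yield None instead of raising.
  def get(xs, i):
      return xs[i] if 0 <= i < len(xs) else None

  xs = [10, 20, 30]
  print(get(xs, 1))            # 20
  print(get(xs, len(xs) + 1))  # None -- every caller must handle this case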

[Edit: Pony is different.]


Pony does have exceptions, and indexing an array does raise if the index is out of bounds. It doesn't have runtime errors because all exceptions must be handled somewhere in the call stack; the sooner, the better.

Because of this rule, defining / as a partial function would force the user to fence every single division in a try clause.

Using boxed integers as Elm does is not an option, as Pony targets high-speed computing, not browser rendering. Different fields, different trade-offs.

Defensive users can well define `div(I32, I32): I32?` that throws an error, and use it instead of `/`.

As repeated in this thread, 1/0 is undefined because it is nonsensical. So it's OK to have any sane, consistent, and well-documented behaviour. At least with Pony, the user can choose speed and convenience when they're sure there is no possible 0 at this place. Well, TBH, I think that's always the case: because there is no such thing as division by zero, your algorithm is simply wrong and unsound if it ends up dividing by 0, and that's where this case differs from integer overflow.

Note: in Pony literature, "partial" means "can throw an error", i.e. "partial" as in "doesn't apply to the whole domain of the argument types".


Julia gives Inf, with Inf + Inf = Inf and 1/Inf = 0.0.

I prefer this because if you have a function converging on 0 in the denominator, you will see growth towards Inf, then Inf itself, rather than growth followed by a cliff, which could then propagate into all your other calculations and move you from the very small to the very large in an instant.
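
That's just IEEE 754 float behavior, which Python's floats reproduce once an infinity is in hand (though, unlike Julia, Python raises on 1/0.0 itself):

  import math

  print(math.inf + math.inf)  # inf
  print(1 / math.inf)         # 0.0
  print(0 * math.inf)         # nan -- where Inf propagation turns into NaN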


Sometimes I wonder how much actual energy is expended by humanity on such 'debates' (electricity, coal, nuclear..). I'm just the kind of person that likes to put things into perspective. I'm certainly as 'guilty' as the next person of engaging in such trifling matters :-)


This triggered an immediate "that is wrong!" response from me, which I had to suppress to actually read the argument.

From this definition, I'd guess that Pony isn't optimized for numerical code and floating-point operations, because there it tends to make less sense.


Please note that 1/0 = 0 in Pony applies only to integer math.

1.0 / 0.0 = infinity, per the (IEEE 754) floating-point standard.

Integer 1/0 is undefined behavior and is left to language implementers to decide how to handle.


Meanwhile, in Python v3.7.0...

      >>> 1 / 0
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      ZeroDivisionError: division by zero
      >>> 0 / 0
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      ZeroDivisionError: division by zero
      >>> 1 / 0.0
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      ZeroDivisionError: float division by zero
      >>> 0.0 / 0.0
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      ZeroDivisionError: float division by zero
Meanwhile, in Elixir v1.7.1...

      iex(1)> 1 / 0
      ** (ArithmeticError) bad argument in arithmetic expression: 1 / 0
          :erlang./(1, 0)
      iex(1)> 0 / 0
      ** (ArithmeticError) bad argument in arithmetic expression: 0 / 0
          :erlang./(0, 0)
      iex(1)> 1.0 / 0.0
      ** (ArithmeticError) bad argument in arithmetic expression: 1.0 / 0.0
          :erlang./(1.0, 0.0)
      iex(1)> 0.0 / 0.0
      ** (ArithmeticError) bad argument in arithmetic expression: 0.0 / 0.0
          :erlang./(0.0, 0.0)

Meanwhile, in Ruby v2.5.1...

      irb(main):001:0> 1 / 0
      Traceback (most recent call last):
              3: from /Users/mpope/.rbenv/versions/2.5.1/bin/irb:11:in `<main>'
              2: from (irb):1
              1: from (irb):1:in `/'
      ZeroDivisionError (divided by 0)
      irb(main):002:0> 0 / 0
      Traceback (most recent call last):
              3: from /Users/mpope/.rbenv/versions/2.5.1/bin/irb:11:in `<main>'
              2: from (irb):2
              1: from (irb):2:in `/'
      ZeroDivisionError (divided by 0)
      irb(main):003:0> 1 / 0.0
      => Infinity
      irb(main):004:0> 0.0 / 0.0
      => NaN
Meanwhile, in PHP v7.0.8...

      > echo 1 / 0;
      >
      INF
      Division by zero :1
      > echo 0 / 0;
      >
      NAN
      Division by zero :1
      > echo 1 / 0.0;
      >
      INF
      Division by zero :1
      > echo 0.0 / 0.0;
      >
      NAN
      Division by zero :1
Meanwhile, in Node v10.7.0...

      > 1 / 0
      Infinity
      > 0 / 0
      NaN
      > 1 / 0.0
      Infinity
      > 0 / 0.0
      NaN


Meanwhile, in Python...

  In [1]: 255 + 1 is 256
  Out[1]: True
  In [2]: 256 + 1 is 257
  Out[2]: False
Every language has its inconsistencies when it comes to math, because they run on real-world computers. (The example above works because CPython caches the small integers from -5 to 256, so `is` compares object identity, not numeric value.) All of these are "slow" interpreted languages that allow runtime failures and don't use plain machine integers. Just a different trade-off.

Anyway, the Inf/NaN trick is not better than the 0 trick in my opinion, because all subsequent operations happily result in NaN. I've never seen, in any NaN language, a program that checks the result of each and every division for NaN.

with a NaN-kind of language, it would be:

  import math

  a = b / c
  if math.isnan(a):  # a == NaN would always be False, since NaN != NaN
      print("oops")
with a 0-trick language:

  a = b / c
  if a == 0 and c == 0:
      print("oops")
not THAT different, IMO :)


When I make a language, there will be an OuchMyHeadHurtsCommaSpaceGotSomeAspirinQuestionMarkError.


Just yesterday I had a 1/0 bug (code iterating along some line segments that forgot to account for the possibility of a zero-length segment). If 1/0 was 0, the code would have worked correctly instead of crashing.

So, there's one data point in favor. :-)


> code iterating along some line segments that forgot to account for the possibility of a zero-length segment

Out of curiosity, could you describe what you were doing and why 1/0 = 0 would have returned the correct result?


Hah, you know what, looking more closely at the code, the bug was not what I thought, and while the code doesn't crash now, it will still do the wrong thing!

So the code was given a path described by line segments p0-p1-p2-...pN, and what it wanted to do was find the point s distance along the path.

It's iterating over pairs of points (pI, pJ) and looking at the segment length. Is the segment length > the remaining length? If so, then we know our point is somewhere on the segment (pI, pJ). In particular, it is (remaining length) divided by (segment length) percent of the way along the segment.

This code was dying on zero-length segments (i.e., a repeated point) because of the division above. I wasn't thinking too clearly and fixed the problem by just skipping zero-length segments.

But actually, the only way that code can even trigger is if the original distance along the path was negative, in which case we actually just want to always return the start of the path.

So, in summary, my particular case was buggy either way, and the 1/0 behavior doesn't hurt or help (except that it caused a crash instead of an incorrect line in some cases; not really sure if that's better or worse).

For the case where we do have a zero-length segment, 1/0 = 0 would give desired behavior (just scale to first point in segment). (But of course, when it's a zero length segment, any value would be OK, since it would just get multiplied by zero again.)
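
For the curious, here is a minimal sketch of the traversal being described (hypothetical Python, not the parent's actual code):

  import math

  # Return the point s distance along the path p0-p1-...-pN.
  def point_along(points, s):
      for (x0, y0), (x1, y1) in zip(points, points[1:]):
          seg = math.hypot(x1 - x0, y1 - y0)
          if seg >= s:
              t = s / seg  # raises on a zero-length segment; with 1/0 = 0, t would be 0
              return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
          s -= seg
      return points[-1]  # s was longer than the whole path

  # The repeated point (a zero-length segment) is skipped harmlessly while s > 0;
  # the division only blows up once the remaining distance is <= 0.
  print(point_along([(0, 0), (1, 0), (1, 0), (2, 0)], 1.5))  # (1.5, 0.0)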


I think a principled mathematical way to say it is that the multiplicative structure of a field is a group object in the monoidal category of pointed sets and the smash product (with 0 as the point). So the multiplicative inverse of 0 really is 0.


So I believe that under this article's definition the equation:

y = 1/x

will result in two nice asymptotes with a dot in the middle at (0,0).

1/0 = 0 doesn't make sense to me, not least because lim x->0+ of 1/x is +infinity and lim x->0- of 1/x is -infinity,

so the value that makes more sense for 1/0 is infinity.


Why infinity rather than negative infinity? Is 1/0 = 1/(-0)?


Doug Crockford argues the same thing in his talk about the future of programming languages:

The Post JavaScript Apocalypse https://youtu.be/99Zacm7SsWQ?t=2434


Please forgive my tone; it's not in keeping with the noble spirit of Hacker News. But here goes:

So that's it. I read your proof. You try cloaking it in pseudo-mathematical formulation, but what you are really doing is defining division as a derived operator, rather than the axiomatic operator it is in mathematics. I can say 1 = infinity in my universe, and as long as I am consistent, it is sound.

You're saying this is a useful property to have, and certainly that holds merit in certain conditions, but it is not a correct property.

Your article title should be clarified to state "1/0=0 (in my universe)". But then it wouldn't be controversial, would it? After all, this is just an experiment to gain some visibility for Pony, isn't it?


The OP didn't invent the definition of fields in terms of addition and multiplication, with division defined as the inverse of multiplication. Even the Wikipedia article on fields (https://en.wikipedia.org/wiki/Field_%28mathematics%29#Defini...) defines them this way. (Not that Wikipedia is an authority on truth, but I find that it's a pretty good indicator of what positions are common/widespread regarding a subject.) It's a bit silly to claim that division is defined as axiomatic "in mathematics" given how fundamental fields, defined as the OP defined them, are to modern number theory and mathematics in general.


Nice try, but sorry.

In your link, division is defined as the inverse of multiplication. However, the inverse of multiplication is one of the axioms of the field, so by transitivity, division is an axiom as well (it's simply a convenient label for "the inverse of multiplication"). One can attach the name "division" to some other axiom or derived property, absolutely, but then "the inverse of multiplication" still needs to be dealt with. The author, if you note, conveniently omits the multiplicative inverse from their stated axioms.

I'm far from a pure mathematician, but even I can see through the facade, which implies that it's pretty thin.


My math is rusty, but 1 / 0 := 0 implies that 0 * 0 := 1. This contradicts the definition of a binary field [1], let alone the field of real numbers.

[1] https://en.wikipedia.org/wiki/GF(2)

> "It is totally fine to define 1/0 = 0."

No, it's not, or at least it's not useful. If you define it that way, you will not have a field, and then you don't have +, -, *, and / operations with the commonly assumed behavior. However, it is definitely possible to define some operation such that 1 `op` 0 := 0, just not the inverse operation of multiplication.


> However, it is definitely possible to define some operation such that 1 `op` 0 := 0, just not the inverse operation of multiplication.

If you read the post this is actually exactly what the OP is saying.


Sure, the author spent a long blog essay and didn't get the point across as clearly as many of the comments here on HN. If this were a research paper, it would likely get rejected without review. Not to mention that the whole point of the idea is trivial. Why reinvent a wheel that traps everyone in the author's whimsical thinking when there is a standard way of doing this (1 / 0 => raise ZeroDivisionError)?

I, for one, will respectfully decline to use any programming language that defines 1/0 to be 0.


Pony lang says 1/0=0

I say neigh (sorry if you didn't catch that joke, I'm a little horse).

If x/0 = 0 then, since division is the inverse of multiplication, x = 0*0 = 0, which fails to hold for any x that is not 0.

But the story says there is no defined multiplicative inverse of zero (it's undefined), so they are free to choose any value they want. And they chose 0.

This is unsatisfactory to me because it seems to break the intuitive pattern of x/n getting larger as n approaches zero from the positives: 1/4 = 0.25 < 1/3 ≈ 0.33 < 1/2 = 0.5 < 1/1 = 1 > 1/0 = 0 > 1/-1 = -1 < 1/-2 = -0.5 < 1/-3 ≈ -0.333

You see that sign flip thingy happening around 1/0?

Maybe I am misunderstanding.


> But is Pony doing something unsound? Absolutely not. It is totally fine to define 1/0 = 0. Nothing breaks and you can’t prove something false. Everybody who was making fun of Pony programmers for being ‘bad at math’ doesn’t actually understand the math behind it.

This is playing semantic games. A whole lot does break: / no longer has its usual properties, and if you use those usual properties you can certainly prove false things. It's no different from saying that it's totally fine to define 2 + 2 = 5 and nothing breaks, you then just have to say that 5 isn't the arithmetic successor of 4 any more.


What usual properties does 1 / 0 = 0 break?


That dividing by a number means a multiplicative inverse for that number exists. That a * (b / c) is always equal to b * (a / c). All the things the article goes through.


It is pretty annoying when you see NaN on a web page. I personally have fallen into the division by zero trap on numerous occasions.


If the thing is zero, don't divide; handle it like the base case in a recursive function. This is the maths version of a null pointer exception.


1/0 is infinity, to keep consistency with the beauty of geometry. Consider, for example, the stereographic projection.


A trick to ensure you don't divide by zero in C with minimal overhead (it converts a zero divisor to 1): x/(y+(y==0))
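
The same trick happens to work in Python, where a bool coerces to 0 or 1 (quick check; note it returns x rather than 0 when y is 0):

  def safe_div(x, y):
      return x / (y + (y == 0))  # a zero divisor silently becomes 1

  print(safe_div(10, 2))  # 5.0
  print(safe_div(10, 0))  # 10.0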


One thing this all seems to miss is that this is about INTEGER division. So any field argument does not apply?!?


<deleted>

Thought of another reason besides below comment why I was wrong, but thanks goes to below comment too!


Because it has the unfortunate consequence of eliminating the uniqueness property of fields.

If you allow 0 to have a multiplicative inverse 1/0, then the product of 0 and 1/0 must equal the multiplicative identity, which is 1. But 0 times anything is 0, so it follows that 0 = 0 * (1/0) = 1. From here it's straightforward to prove that x = y for all x, y in the field F.

A multiplicative inverse is a division. You cannot define division by x in fields without defining multiplication by 1/x and vice versa. The article is plainly incorrect in its attempt to refute this point (and in any case, shouldn't be using field theory to justify its purpose in the first place).


1 / 1 = 1; 1 / 0.1 = 10; 1 / 0.01 = 100; 1 / 0.001 = 1000; ...


The convention is consistent. I don't find it useful, but it is consistent.


If you have a cake and divide it into zero parts, where did the cake go?


I.e. a short-circuit means current = 0 because it trips a fuse?


You know.. dividing is like breaking something up.. If you divide by 2, you break it up into 2 pieces.. if you divide by 0, from some standpoints, you do have nothing, because you broke it up into 0 pieces, and 0 pieces is 0.


> You know.. dividing is like breaking something up.. If you divide by 2, you break it up into 2 pieces.. if you divide by 0, from some standpoints, you do have nothing, because you broke it up into 0 pieces, and 0 pieces is 0.

And dividing by 0.5 is breaking into how many pieces exactly?


I guess it's more flipped: how many pieces equal the whole, not how many pieces the whole can be broken up into. That's why you can't divide by 0; any number of nothings can't make a whole.

*disclaimer: I have no math background whatsoever


The function "division" is just like any other function. It takes two inputs, and gives an output.

The reason it's a useful function is it has some properties:

y=1/x is continuous except where x==0. That is, as you approach 0 from either side, it grows to either +- infinity. The developers of this language want to make this function ==0 when x is 0.

You can do that, sure, but it doesn't change the fact that when you go from +0.0000001 to -0.0000001, you now have 2 non-continuous jumps instead of just one.
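
A quick table (Python sketch, with f standing in for the proposed definition) makes the two jumps visible:

  def f(x):  # 1/x with the 1/0 = 0 convention
      return 0 if x == 0 else 1 / x

  for x in (-2**-20, 0, 2**-20):
      print(x, f(x))
  # -9.5367431640625e-07 -1048576.0   <- one jump, up to 0
  # 0 0
  # 9.5367431640625e-07 1048576.0     <- and a second jump, away from it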


ah thanks for the explanation!


I love how people without an actual mathematical background make this sort of argument, not because they're wrong but because it helps me check when I make those assumptions myself.

Multiplying by the square root of 2 is also a fun one, since you need limits and the least upper bound property to show people that the real numbers even make sense, and that things they expect to work "intuitively" (like division or multiplication) only have approximate meaning when they use the common operations.


We are talking about machine integers here... so division is actually all about evenly dividing counts of things, with the possibility that you might have some left over.

Reals are fundamentally different: they are not found in things like computers and in general are of no practical use. For work with values (as opposed to counts) we use something called floating point.


The analogy actually holds: below 1 it reverses, and you 'unbreak' your something (which is now halved) into a larger one.


I guess it's more flipped: how many pieces equal the whole, not how many pieces the whole can be broken up into. That's why you can't divide by 0; any number of nothings can't make a whole.

So all that to say: we can't treat 0 like a number.

*disclaimer: I have no math background whatsoever, this is just a fun topic.


And as it follows, if you divide by 0.00000000001, you can fit a whole lot of pieces in your original value... so what should dividing by 0 (which is even smaller than 0.000000001) naturally do? 0, or +infinity?


> if you divide by 0, from some standpoints, you do have nothing, because you broke it up into 0 pieces, and 0 pieces is 0.

This explains exactly why we shouldn't divide by 0. The pieces after breakage should, if one is so inclined, be able to be re-assembled into the whole; but, once you break something into 0 pieces, it is gone. Thus at best `0/0` could be defined … but the problem there is not that there's no answer but that every answer is equally good.


  You know.. dividing is like breaking something up [..]
No. What it is 'like' is totally irrelevant, and actively harmful to consider if you want to get anything serious done with your programming language. This is not the place to start philosophizing. For consistent arithmetic it is important to know which axioms are fulfilled. Division needs to be a mathematically clearly defined operation.


I mean, you’re not wrong. Not right either (for technical reasons), but that kind of argument leads you to mathematical objects like the Cantor set: a set of line segments with zero total length. Mathematically, you have a bunch of objects; practically, you have nothing.


1/0 = 0 is just plain lazy. So is 0^0 = 1. Now, some might argue it is a matter of how you define the operators. What I say is: then separate them cleanly, e.g. pow(0,0) = 1 and powr(0,0) = NaN.


Some symbol is failing to render on my Android phone.


1/0 = #DIV/0! (Sorry couldn't resist)


1/0 = +infinity; 1/+infinity = 0; simple.


Except machine integer types do not have infinities in them. So not so simple in the real world.


It's like in Ruby: when you divide 3/2 you get 1 (integer division), which is almost never the behavior you want (unless you cast all your numbers to floats, which is a pain to do).


> It's like in Ruby: when you divide 3/2 you get 1 (integer division), which is almost never the behavior you want

I dunno; if I'm dividing a quantized space, which is a fairly common thing to do, it's exactly what I want.

> (unless you cast all your numbers to floats, which is a pain to do)

Using floats just gets you different wrong answers than using integers, unless you happened to somehow have ints to start and want a floating point approximation, which I find is less likely than wanting integer division.

What you really want for exact division where integers are involved is to have at least one operand specified as a rational; this also takes fewer keystrokes, if it's a literal, than changing it to a float to get an inexact answer.

Related to this: prior to Ruby 2.5, the standard library included "mathn", a library which monkey-patched all the numeric types so that operations on them acted pretty much as if Ruby had a Scheme-style numeric tower. But since that has process-wide effects not restricted to where the library is required, it was deprecated in 2.2 and removed in 2.5.


When dealing with integer quantities, and together with modulo, it's pretty natural: n/d gives the integer result, n%d gives the rest. n/2, for example, always gives you the "middle" element. And in, e.g., a loop where you transmit elements in blocks, you handle full blocks n/blocksize times, followed by a partial block with n%blocksize elements (sketched below).

It's not always what you want, but it's pretty reasonable. The alternative would be for it to always round up, which seems weirder to me. I don't think rounding to the nearest integer makes sense with integer quantities at all; that's strictly a real-number operation.
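
For instance, here's that block loop as a throwaway Python sketch:

  data = list(range(17))
  blocksize = 5
  full, rest = divmod(len(data), blocksize)  # 3 full blocks, 2 left over
  blocks = [data[i * blocksize:(i + 1) * blocksize] for i in range(full)]
  partial = data[full * blocksize:]          # the n % blocksize leftovers
  print(len(blocks), len(partial))           # 3 2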


In Ruby there is % which gives you remainders.

When we first teach children about division we teach them that, "17/5 = 3 with a remainder of 2". It is only later that they learn about decimal points and learn to say that 3.4 is the answer.


> do not mock other programmers

Who is mocking anyone though?


What about for really large values of zero?


Screams into the calculus void!


What an absurdly dumb article.


So x%0 = x for all x?


The set of integers does not form a field, so the argument is nonsense.


> The set of integers does not form a field

The set of integers represented by the integer types of Pony can be treated as a field, since they have discrete ranges.


Oh, so 0*0 = 1?


> So it is _not_ a theorem that a * (b / 0) = b * (a / 0)

As the article explains, the whole issue about dividing by 0 is the lack of multiplicative inverse. If you define division for 0 as something other than multiplication by the multiplicative inverse, there is no longer an issue doing the division. Hope that helps.


Then it will not be math.


I'm curious what you mean by this: there is nothing "privileged" about the system of arithmetic we usually use compared to this alternate definition.

The math that we all learn up through high school is a system with a carefully chosen set of axioms that happen to be useful in most cases, and it has become the default system, but there isn't some intrinsic property of it that makes it "math" and an alternate consistent set of axioms "not math".


If an HN title said 1*0=0, could you reply "Oh, so 1 = 0/0"?

The entire point of the article was suggesting you can define a consistent system without multiplication and division being entirely symmetrical (as they already are not).


Actually, that's the entire point of 0/0. It's indeterminate, meaning it can only take on a fixed value in the context of an operation that approaches it (usually a limit).


Did I offend you personally or what? The entire point of this article goes against the math, which means it's bullshit.


> without multiplication and division being entirely symmetrical (as they already are not)

What do you mean?


Multiplication by x and division by x are inverses, except there is already the sole special case of x = 0 where that isn't true.

The common way to resolve that and be consistent is to axiomatically decide:

1) there is no inverse of * 0

2) 0 is not part of the domain permitted for 1/x

The point of the article is that another way to resolve it and be consistent is to axiomatically decide:

1) there is no inverse of * 0

2) 0 is part of the domain permitted for 1/x, and the resulting value is 0.

As in, (x/y) * y = x is true either way only if y != 0. The only difference is whether, when y = 0, it fails because the expression is simply illegal, or because (x/0) * 0 = 0 for all x.
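
That difference is easy to check mechanically; here's a throwaway Python sketch, with div standing in for the convention the article proposes:

  def div(x, y):
      return 0 if y == 0 else x / y  # the 1/0 = 0 convention

  for x in (4, -8, 1):
      for y in (2, -4, 1, 0):
          assert div(x, y) * y == (x if y != 0 else 0)
  # (x/y) * y == x whenever y != 0, and (x/0) * 0 == 0 for all x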


> As in, (x/y) * y=x is true either way only if y!=0. The only difference is whether when y=0 if it is not true because it is just illegal or if it is not true because (x/0) * 0 = 0 for all x.

Sure, I understand this claim. What I don't understand is the precise meaning of the claim "multiplication and division already are not entirely symmetrical." I don't know any existing technical meaning of "entirely symmetrical" for a pair of operations, but let's suppose that I switch the operations in your statement:

> (x/y) * y = x is true [for all x] only if y != 0.

Then I get:

> (x * y)/y = x is true [for all x] only if y != 0.

That's also true. What's the asymmetry?


isn't a pony an animal you ride? where's the math?


Hey look, /r/badmathematics is leaking


When setting out to add a new axiom to a heavily studied area of mathematics, perhaps one should at least formulate an argument that isn't shown to be incomplete by the most basic Wikipedia search.

https://en.wikipedia.org/wiki/Division_by_zero



