
Why is it ok to divide by 0.0? - okaleniuk
https://wordsandbuttons.online/why_is_it_ok_to_divide_by_0_0.html
======
dvt
I've read this article twice now and I'm confused about its intended audience.

If it's meant to be read by programmers, it's basically all wrong. Per IEEE
754, dividing any nonzero number by ±0.0 will give you ±Infinity (and
0.0/0.0 will give you NaN) -- so I'm not exactly sure how that's "ok." Also,
the idea that it's "important to fit things into some relatively small
amount of digits" because of "the speed of light" is just bizarre. It's like
saying the reason we build bridges is because of gravity... I mean, yeah...
I guess? On the other hand, if it's for laypeople, it does a terrible job of
explaining much of anything. You'd need prior knowledge of floating point
numbers, scientific notation, division by zero, etc.
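
For the record, the actual behavior takes three printfs to check (a minimal
C snippet, assuming the platform's double is IEEE 754 binary64):

    #include <stdio.h>

    int main(void) {
        printf("%f\n", 1.0 / 0.0);   // inf
        printf("%f\n", 1.0 / -0.0);  // -inf
        printf("%f\n", 0.0 / 0.0);   // nan -- not an infinity
        return 0;
    }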

Are people actually reading it or just upvoting on a whim?

------
GolDDranks
That's handwaving the "OKness of dividing by zero" – yeah, computers can only
do limited-precision arithmetic performantly. But by allowing overflow to
infinity, we are essentially making the relative error infinite instead of
bounded. After reading about Posit arithmetic (
[https://www.posithub.org/docs/Posits4.pdf](https://www.posithub.org/docs/Posits4.pdf)
) I think that IEEE 754 under- and overflows and multiple representations of
zero are a design mistake.

~~~
nwallin
What do posits do when you run out of bits? 32 is ... not very many bits.

~~~
GolDDranks
They round to non-infinite values (the maximum and minimum representable
finite values). This is not ideal, of course, but the relative error stays
bounded, unlike with rounding to infinity.

Additionally, they have a neat encoding that makes the sizes of the mantissa
and exponent fields variable. (The total bit size of the number is still
fixed at 32 or 64 bits, though.) This gives "tapered" accuracy at very large
and very tiny magnitudes, which makes under- and overflow more gradual and
buys more accuracy where most calculations tend to happen, around 1.
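
To make the contrast concrete, here is the IEEE 754 side of it (a C sketch;
the posit behavior in the comment is what the posit spec prescribes, not
something the C library provides):

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        double big = DBL_MAX;       // ~1.8e308, the largest finite double
        printf("%e\n", big * 2.0);  // inf -- relative error just became infinite
        // A posit of the same width would round to maxpos, its largest
        // finite value, keeping the relative error finite (if large).
        return 0;
    }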

------
segfaultbuserr
This article has a speed-of-light theme, so we may as well talk about it.

> _How far do you think light travels in one nanosecond given the optimal
> conditions? In one nanosecond, the light travels only about 30 centimeters.
> This means that if you want something computed in under 1 nanosecond, the
> signal starting your computation has to go through all the transistors that
> do the computation by the path that is only 30 centimeters in length._

In real circuits, light travels in a medium, not in free space or air. The
speed of light in a medium depends on the relative dielectric constant K of
the insulator material. For standard SiO2, K is approximately 4, giving a
speed of 0.5c, or only 15 centimeters per nanosecond. (Coincidentally, a
typical circuit board has roughly the same dielectric constant, and even
coax, CAT-6, or fiber optic cables are not far off, so 15 cm/ns is a useful
number to remember.)
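
Back-of-the-envelope, taking K = 4 for SiO2 as above (a C one-liner,
really):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double c = 29.9792458;  // speed of light in vacuum, cm per ns
        double K = 4.0;         // relative dielectric constant of SiO2
        printf("%.1f cm/ns\n", c / sqrt(K));  // prints 15.0 cm/ns
        return 0;
    }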

In the history of process node improvements, various doping techniques have
been developed to decrease (or increase) K for better performance. The
primary motivation for lowering K is _not_ to increase the speed of light,
but to reduce parasitic capacitance: charging and discharging a capacitor
creates a much longer delay than the theoretical propagation delay of
c / sqrt(K).

[https://en.wikipedia.org/wiki/Low-%CE%BA_dielectric](https://en.wikipedia.org/wiki/Low-%CE%BA_dielectric)

Another area of research is optical interconnects.

[https://en.wikipedia.org/wiki/Optical_interconnect](https://en.wikipedia.org/wiki/Optical_interconnect)

------
chroem-
I really wish languages would stop following this antipattern of throwing
exceptions instead of allowing normal handling of Inf and NaN values. There
are some cases where I really do want to perform comparisons against a non-
finite value, or do arithmetic with one. Non-finite values are part of the
floating point spec, and partitioning off this part of FP functionality is
non-standard behavior.
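
For example, the spec already gives non-finite values perfectly sensible
semantics in comparisons and arithmetic (a small C illustration; nothing
here should trap or throw):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double slope = 1.0 / 0.0;         // +inf: a vertical line, not an error
        if (isinf(slope) && slope > 0.0)  // comparing against inf just works
            printf("steeper than any finite slope\n");
        printf("%f\n", atan(slope));      // atan(+inf) == pi/2 per the spec
        return 0;
    }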

~~~
FeepingCreature
It used to be the case that nans, infs, denormals, anything that wasn't a
normal float, were _really really_ slow to process. This seems to have
improved somewhat over the last decade, but it's easy to see why it soured
people on the concept of propagating nans - it's basically saying "there's a
value called the Slow, and it turns everything it touches Slow, and you should
let the Slow propagate throughout your data." No, I don't want the Slow to
propagate through my data, I want to catch it where it is created so I can fix
the Slow there and stop it from becoming an issue in the first place.

("Demons of Slow! Be thou bound by the secret power of Fast, whose sign is DAZ
and FTZ! By the command of fesetexcept(FE_ALL_EXCEPT) I shall expose thee!")
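
(The incantation is real on glibc, by the way: feenableexcept() turns the
sticky IEEE flags into traps, so the NaN dies where it's born. A sketch --
note that feenableexcept is a GNU extension, not standard C:)

    #define _GNU_SOURCE
    #include <fenv.h>
    #include <stdio.h>

    int main(void) {
        // Trap invalid operations and division by zero as SIGFPE instead
        // of letting NaN and inf propagate silently through the data.
        feenableexcept(FE_INVALID | FE_DIVBYZERO);
        volatile double zero = 0.0;
        double x = zero / zero;  // SIGFPE fires right here, at creation
        printf("%f\n", x);       // never reached
        return 0;
    }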

------
axaxs
My first math teacher taught us division by zero was infinity. He justified it
by showing division by 1, then fractions, and how the line goes up towards
infinity. Today this is seen as incorrect, but it still makes more sense to
me than other divide-by-zero explanations...
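
The intuition is easy to reproduce numerically (a C sketch), with the caveat
that it only chases the limit from one side:

    #include <stdio.h>

    int main(void) {
        printf("1/0.1    = %g\n", 1.0 / 0.1);     // 10
        printf("1/0.01   = %g\n", 1.0 / 0.01);    // 100
        printf("1/0.001  = %g\n", 1.0 / 0.001);   // 1000
        // ...but approach zero from the other side and the quotient dives
        // toward -infinity, one reason mathematics leaves x/0 undefined:
        printf("1/-0.001 = %g\n", 1.0 / -0.001);  // -1000
        return 0;
    }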

~~~
b0rsuk
Sounds nice at first, but if you poke at it some more you run into
inconsistencies. This 9-minute video by Eddie Woo shows some.

[https://www.youtube.com/watch?v=J2z5uzqxJNU](https://www.youtube.com/watch?v=J2z5uzqxJNU)

Spoiler: A major one goes like this. Suppose x/0 is infinity. Then:

1/0 = infinity

2/0 = infinity

So 1/0 = 2/0. Multiply both sides by 0, treating division by zero as a
genuine inverse of multiplication, and you get 1 = 2.

~~~
mamcx
I think this shows something similar to this:

[https://en.wikipedia.org/wiki/Monty_Hall_problem](https://en.wikipedia.org/wiki/Monty_Hall_problem)

"1 = 2" can't be evaluated in isolation, because in this case it's really
"1 = 2 AFTER assuming x/0 = infinity".

------
nwallin
Incidentally, this is also why +0.0 is distinct from -0.0. <+something> / +0.0
is +inf, but <+something> / -0.0 is -inf. And if you want to do a >/<
comparison afterwards, you need the +/-.
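
Here is a case (a C sketch) where the sign of zero actually earns its keep:
an underflow that produces -0.0 instead of +0.0:

    #include <stdio.h>

    int main(void) {
        double x = -1e-200 * 1e-200;  // underflows to -0.0, sign preserved
        double r = 1.0 / x;           // -inf, not +inf
        // Because the sign survived the underflow, this comparison still
        // points the right way:
        printf("%s\n", r < 0.0 ? "negative" : "non-negative");
        return 0;
    }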

------
nafey
What happens when you divide 0 by 0 though?

~~~
perl4ever
NaN?

I have gotten "NaN" in dialog boxes where it was really inappropriate, on
occasion.
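
Right: 0.0/0.0 is the one division that yields NaN rather than an infinity,
since no single value is consistent with it. A quick C check, including
NaN's famous refusal to equal even itself:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        volatile double zero = 0.0;
        double x = zero / zero;   // NaN
        printf("%f\n", x);        // prints nan
        printf("%d\n", x == x);   // 0: NaN compares unequal to everything
        printf("%d\n", isnan(x)); // 1: the proper way to test for it
        return 0;
    }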

