
NaNs are annoying because, thanks to them, equality on floating point numbers is not an equivalence relation. In particular, NaN /= NaN.

This means that in Haskell, for example, you cannot really rely on the Eq class representing an equivalence relation. Code relying on the fact that x == x should be true for all x could not work as expected for floating point numbers.

I don't know if this has any practical ramifications in real code, but it certainly makes things less elegant and more complex than they have to be.
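
For instance, a minimal demonstration of the underlying IEEE 754 behavior in C (using the NAN macro from <math.h>, available where the implementation supports IEEE floats):

    #include <math.h>
    #include <stdio.h>

    /* Reflexivity breaks for exactly one kind of value: NaN.
       x == x is false when x is NaN. */
    int main(void)
    {
        double x = NAN;
        printf("%d\n", x == x);   /* prints 0 */
        return 0;
    }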

-----
Attempting to just use simple equivalence with any floating point type is a horrible idea to begin with, even without NaNs. You should instead declare equivalence if the absolute difference between the two numbers is less than some bound based on what you're doing, and not look for bit equivalency.
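
For example, a minimal sketch in C, where eps is a placeholder bound you would pick per application:

    #include <math.h>
    #include <stdbool.h>

    /* Declare two doubles "equal enough" if they differ by less than
       an application-chosen bound. eps is a placeholder, not a
       universally correct value. */
    bool nearly_equal(double a, double b, double eps)
    {
        return fabs(a - b) < eps;
    }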

However, you should be able to examine two NaNs and declare them "equivalent" (for certain definitions of equivalence) by intelligently examining the bits based on the hardware that you're running the program on. In the case of a binary NaN [1] that would entail checking that the exponent fields are both entirely high (eg 0xFF == (a.exponent & b.exponent), assuming a standard 8 bit exponent) and that the mantissas are nonzero (eg a.mantissa && b.mantissa).

[1]: "Binary format NaNs are represented with the exponential field filled with ones (like infinity values), and some non-zero number in the significand (to make them distinct from infinity values)." --http://en.wikipedia.org/wiki/NaN

-----


That's not true; there are plenty of cases where using equivalence is just fine. Integer arithmetic, and algorithms that are more reliably written so as not to contain any empty intervals, are two examples.

-----


He was specifically talking about equivalence on floating point types. Integers don't have or need NaN.

-----


I was talking about integer values (with floating point representation) being multiplied and added (and divided and floored, I suppose).

-----


even just addition and multiplication with floats make simple equivalence a horrible idea, due to uncertainty.

For example:

    float a = 1.0;
    float b = 1000.0;
    for (int i = 0; i < 1000000; ++i)
        a += 1.0;
    b *= b;

There is no guarantee that a == b. Floats make everything more complicated, even simple addition: http://en.wikipedia.org/wiki/Kahan_summation_algorithm
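
For reference, a sketch of the linked Kahan summation algorithm: a running compensation term recovers the low-order bits that naive addition drops.

    #include <stddef.h>

    double kahan_sum(const double *xs, size_t n)
    {
        double sum = 0.0;
        double c = 0.0;                 /* running compensation */
        for (size_t i = 0; i < n; ++i) {
            double y = xs[i] - c;       /* apply the correction */
            double t = sum + y;         /* low-order bits of y are lost here */
            c = (t - sum) - y;          /* recover them */
            sum = t;
        }
        return sum;
    }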

-----


There is no uncertainty. There is a guarantee that a == b (if we ignore the off-by-one error in your post), because IEEE operations are guaranteed to be accurate within half an ulp. You can safely perform addition, subtraction, multiplication, and truncated or floored division, within the 24-bit integer range for single-precision floats and the 53-bit integer range for doubles. This is why people can safely use integers in JavaScript.
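
A quick check of that claim, with the earlier example's off-by-one corrected:

    #include <stdio.h>

    /* Every intermediate value here is an integer below 2^24, so each
       float operation is exact and the comparison is reliable. */
    int main(void)
    {
        float a = 0.0f;
        for (int i = 0; i < 1000000; ++i)
            a += 1.0f;
        float b = 1000.0f;
        b *= b;                       /* exactly 1000000.0f */
        printf("%d\n", a == b);       /* prints 1 */
        return 0;
    }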

-----


i guess that's what i get for not double checking my math. here's a revised version that (as long as i haven't made any other math mistakes) still fits within a 32 bit signed int but doesn't guarantee simple equality:

    float a = 0.0;
    float b = 10000.0;
    for (int i = 0; i < 100000000; ++i)
        a += 1.0;
    b *= b;

why in the world would you use a float instead of an int for addition, subtraction, and multiplication, and truncated or floored division, within the 24-bit integer range? it seems like there's no benefit to offset the facts that floating point operations are slower than integer operations and that ints can store integers 7 or 8 bits larger.

and what happens when you go beyond 24 bits? since it's a float no error or warning will be thrown, but now equivalence won't work for numbers that are easily stored by an int.
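
for example (a minimal demonstration; 2^24 + 1 is the first integer a float silently cannot hold):

    #include <stdio.h>

    int main(void)
    {
        float f = 16777217.0f;             /* 2^24 + 1 rounds silently */
        printf("%.1f\n", f);               /* prints 16777216.0 */
        printf("%d\n", f == 16777216.0f);  /* prints 1 */
        return 0;
    }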

-----


Why aren't you capitalizing your sentences? Are you too lazy to write properly?

Where did I say I'd use floating point numbers for integer math? Yes, let's move the conversation in a direction it never went so that you can pretend you were right.

(The place I'd use it would be in a JavaScript implementation, or a Lua implementation, and other situations where I'm designing a programming language where I want the simplicity of having only one numerical type. And that would be a 53-bit range, not 24-bit.)

-----


You said using equivalence for floats that store integers is fine. Here is a link: [1]. The point of my example was to show that that is not the case for numbers that are easily stored by an int that's the same size as a float.

I've not used Lua or JavaScript, but wouldn't it be better to choose an implementation that silently switches between ints, floats, doubles and big-nums as needed? That way you don't limit the speed of float and int operations by autoconverting up to doubles, when double precision isn't needed.

[1]: http://news.ycombinator.com/item?id=4400424

-----


I do not recommend using floating point numbers for integer math. I am saying that if you have integers stored in floating point representation, equality comparisons are fine.

> I've not used Lua or JavaScript, but wouldn't it be better to choose an implementation that silently switches between ints, floats, doubles and big-nums as needed?

If you want to go through the engineering effort, sure, it might make sense to have separate int and double encodings. You have to check for the possibility of integer overflow and convert to double in that case. In particular, this is useful if you want to use JavaScript's bitwise operators. See https://developer.mozilla.org/en-US/docs/SpiderMonkey/Intern... for a description of how SpiderMonkey does it. Lua just has one representation for numbers, double normally but it could be float or long depending on how you compile it. There would be no reason to have single-precision floating point or big-num representations.
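
A rough sketch of that int-or-double scheme in C (the Value layout here is illustrative, not SpiderMonkey's actual encoding):

    #include <stdint.h>

    typedef struct {
        enum { TAG_INT, TAG_DOUBLE } tag;
        union { int32_t i; double d; } as;
    } Value;

    /* Stay in the integer representation until a result overflows
       int32, then promote to double. */
    Value value_add(Value a, Value b)
    {
        if (a.tag == TAG_INT && b.tag == TAG_INT) {
            int64_t sum = (int64_t)a.as.i + b.as.i;
            if (sum >= INT32_MIN && sum <= INT32_MAX)
                return (Value){ TAG_INT, { .i = (int32_t)sum } };
            return (Value){ TAG_DOUBLE, { .d = (double)sum } };
        }
        double x = (a.tag == TAG_INT) ? a.as.i : a.as.d;
        double y = (b.tag == TAG_INT) ? b.as.i : b.as.d;
        return (Value){ TAG_DOUBLE, { .d = x + y } };
    }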

-----


If you mean arithmetic on floating point numbers that only store integers, then no, equivalence is not ok, because there are examples where bit equivalence fails. One example is repeatedly adding 1.0 to a float something like 1 trillion times versus multiplying 1000000.0 by 1000000.0.

What do you mean by

    algorithms that are more reliably written so as not to contain any empty intervals, are two examples.

-----


Obviously I'm referring to integer operations within a 53-bit or 24-bit range, not outside the ability of floating point representations to represent integers!

> What do you mean by

I mean exactly that sort of thing. Algorithms that, for example, divide a line into intervals, where empty intervals are neither needed nor desired, and are generally risky with respect to the probability of having a correct implementation. An example of this would be computing the area of a union of rectangles. You might make a segment tree -- and avoid having degenerate rectangles in your segment tree.
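
Something like this minimal sketch (names are illustrative): exact comparison on integer-valued float coordinates is what lets you drop degenerate intervals before they ever enter the segment tree.

    #include <stddef.h>

    typedef struct { float lo, hi; } Interval;

    /* xs must be sorted; writes only non-empty intervals into out and
       returns how many were written. The exact compare is safe because
       the coordinates are integer-valued floats within range. */
    size_t split_line(const float *xs, size_t n, Interval *out)
    {
        size_t count = 0;
        for (size_t i = 0; i + 1 < n; ++i)
            if (xs[i] != xs[i + 1])
                out[count++] = (Interval){ xs[i], xs[i + 1] };
        return count;
    }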

-----


This looks very much like NULLs in the SQL world.

A null (and NaN) is like an unknown. One can't compare unknowns because they are exactly that, unknown.

Let's construct a language that, on division by zero, returns unknowns:

    a =  5 / 0;
    b = 10 / 0;

Now, both a and b are set to an unknown state. If one were to compare a to b, should the expectation be that they hold the same value?

I wish all languages had nullability like SQL does, where great care has to be taken to deal with nullable data, lest nulls nullify everything.

-----


In some languages, a and b would be +Infinity. Off the top of my head I can't remember whether +Infinity != +Infinity (I'd have to write a test and see). For NaNs, definitely.

-----


One of those languages is JavaScript, so--assuming your browser supports JavaScript--you do have somewhere to test it :P.

The answer is that 5 / 0 is Infinity and (5 / 0) == (5 / 0), so it's all good :). Now, 0 / 0 is NaN and (0 / 0) != (0 / 0).
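
The same behavior falls out of IEEE 754 doubles in C, assuming an implementation that follows Annex F (integer division by zero would be undefined, but double division is well-defined):

    #include <stdio.h>

    int main(void)
    {
        double inf = 5.0 / 0.0;             /* +Infinity */
        double nan = 0.0 / 0.0;             /* NaN */
        printf("%d\n", inf == 5.0 / 0.0);   /* prints 1 */
        printf("%d\n", nan == nan);         /* prints 0 */
        return 0;
    }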

-----


Would you prefer NaN be another type, like Maybe? And have to do all math in a monad or with chronically repeated case analysis? It's necessary complexity, whichever way you do it.

-----


Oh, I don't really have a good solution in mind--it's just annoying.

I guess one option would be to just declare that all NaNs are equal--I'm pretty sure that's how bottoms work in Haskell, and it seems that NaN is essentially a floating-point version of bottom.
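
That option is simple to sketch in C: a comparison wrapper (a hypothetical total_equal, not part of any standard library) that collapses all NaNs into a single equivalence class.

    #include <math.h>
    #include <stdbool.h>

    /* Equality with every NaN declared equal to every other NaN. */
    bool total_equal(double a, double b)
    {
        if (isnan(a) || isnan(b))
            return isnan(a) && isnan(b);
        return a == b;
    }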

-----


There are, however, multiple bit patterns for NaN, which complicates that.

-----


Yes. I was actually thinking about this. However, you have a similar problem with the current model: two NaNs can have the same bit pattern, but still have to be unequal.

This is somewhat simpler--as long as either argument to (==) is NaN the answer is False. However, you still have to figure out that at least one bit pattern corresponds to NaN. If you want them to be equal, you would have to figure out that both are NaN. This is certainly a little more difficult, but I think it isn't much worse than the current scenario.

-----


I think, though, that you currently just subtract them and test for equality with zero, so it is all done by the hardware.

-----


If you try to compare something to bottom (undefined), your program will crash. Try it.

-----



