Attempting to use simple equivalence with any floating-point type is a horrible idea to begin with, even without NaNs. You should instead declare equivalence if the absolute difference between the two numbers is less than some bound based on what you're doing, and not look for bit equivalence.

However, you should be able to examine two NaNs and declare them "equivalent" (for certain definitions of equivalence) by intelligently examining the bits based on the hardware that you're running the program on. In the case of a binary NaN [1] that would entail checking that the exponent fields are both entirely high (eg 0xFF == (a.exponent & b.exponent), assuming a standard 8-bit exponent) and that the mantissas are nonzero (eg a.mantissa && b.mantissa).

[1]: "Binary format NaNs are represented with the exponential field filled with ones (like infinity values), and some non-zero number in the significand (to make them distinct from infinity values)." -- http://en.wikipedia.org/wiki/NaN

 That's not true; there are plenty of cases where using equivalence is just fine. Integer arithmetic, and algorithms that are more reliably written so as not to contain any empty intervals, are two examples.
 He was specifically talking about equivalence on floating point types. Integers don't have or need NaN.
 I was talking about integer values (with floating point representation) being multiplied and added (and divided and floored, I suppose).
 Even just addition and multiplication with floats make simple equivalence a horrible idea, due to uncertainty. For example:

``````
float a = 1.0;
float b = 1000.0;
for (int i = 0; i < 1000000; ++i)
    a += 1.0;
b *= b;
``````

There is no guarantee that a == b. Floats make everything more complicated, even simple addition: http://en.wikipedia.org/wiki/Kahan_summation_algorithm
 There is no uncertainty. There is a guarantee that a == b (if we ignore the off-by-one error in your post), because IEEE operations are guaranteed to be accurate within half an ulp. You can safely perform addition, subtraction, and multiplication, and truncated or floored division, within the 24-bit integer range for single-precision floats and the 53-bit integer range for doubles. This is why people can safely use integers in Javascript.
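That exact-integer range is easy to demonstrate with a sketch that round-trips an integer through a float (the helper name is illustrative):

```c
#include <stdint.h>

/* Sketch of the exact-integer range for single precision: an integer
   survives a round trip through float iff it has an exact float
   encoding. 2^24 = 16777216 is representable; 2^24 + 1 is the first
   positive integer that is not (it rounds back down to 2^24). */
static int float_is_exact(int64_t n) {
    return (int64_t)(float)n == n;
}
```

The same check against a double instead of a float holds for every integer up to 2^53.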
 I guess that's what I get for not double-checking my math. Here's a revised version that (as long as I haven't made any other math mistakes) still fits within a 32-bit signed int but doesn't guarantee simple equality:

``````
float a = 0.0;
float b = 10000.0;
for (int i = 0; i < 100000000; ++i)
    a += 1.0;
b *= b;
``````

Why in the world would you use a float instead of an int for addition, subtraction, multiplication, and truncated or floored division within the 24-bit integer range? It seems like there's no benefit to offset the facts that floating-point operations are slower than integer operations and that ints can store integers 7 or 8 bits larger. And what happens when you go beyond 24 bits? Since it's a float, no error or warning will be thrown, but now equivalence won't work for numbers that are easily stored by an int.
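For what it's worth, the revised example can be checked directly: the repeated additions stall once the running sum reaches 2^24, while 10000.0f squared lands exactly on 1e8 (a sketch, with the loop factored into a helper):

```c
/* Adds 1.0f to a float n times. Once the sum reaches 2^24 = 16777216,
   adding 1.0f produces 16777217, which is not representable in single
   precision and rounds back to 16777216, so the sum stalls there. */
static float sum_of_ones(long n) {
    float a = 0.0f;
    for (long i = 0; i < n; ++i)
        a += 1.0f;
    return a;
}
```

With n = 100000000 the sum comes out as 16777216.0f, while 10000.0f * 10000.0f is exactly 100000000.0f, so the two are not equal.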
 Why aren't you capitalizing your sentences? Are you too lazy to write properly? Where did I say I'd use floating point numbers for integer math? Yes, let's move the conversation in a direction it never went so that you can pretend you were right. (The place I'd use it would be in a Javascript implementation, or a Lua implementation, and other situations where I'm designing a programming language where I want the simplicity of having only one numerical type. And that would be a 53-bit range, not 24-bit.)
 You said using equivalence for floats that store integers is fine. Here is a link: [1]. The point of my example was to show that that is not the case for numbers that are easily stored by an int that's the same size as a float. I've not used Lua or Javascript, but wouldn't it be better to choose an implementation that silently switches between ints, floats, doubles, and big-nums as needed? That way you don't limit the speed of float and int operations by autoconverting up to doubles when double precision isn't needed.
 I do not recommend using floating point numbers for integer math. I am saying that if you have integers stored in floating point representation, equality comparisons are fine.> I've not used lua or javascript, but wouldn't it be better to choose an implmentation that silently switches between ints, floats, doubles and big-nums as needed?If you want to go through the engineering effort, sure, it might make sense to have separate int and double encodings. You have to check for the possibility of integer overflow and convert to double in that case. In particular, this is useful if you want to use Javascript's bitwise operators. See https://developer.mozilla.org/en-US/docs/SpiderMonkey/Intern... for a description of how SpiderMonkey does it. Lua just has one representation for numbers, double normally but it could be float or long depending on how you compile it. There would be no reason to have single-precision floating point or big-num representations.
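A minimal sketch of that int-or-double scheme (the names are hypothetical, not SpiderMonkey's real API, and `__builtin_add_overflow` assumes GCC/Clang): values stay 32-bit ints until an operation overflows, at which point the result is promoted to a double:

```c
#include <stdint.h>

/* Hypothetical tagged number in the SpiderMonkey spirit: ints stay
   ints until an operation overflows, then the result becomes a double. */
typedef struct {
    int is_int;
    union { int32_t i; double d; } v;
} Num;

static double num_as_double(Num n) {
    return n.is_int ? (double)n.v.i : n.v.d;
}

static Num num_add(Num a, Num b) {
    Num r;
    if (a.is_int && b.is_int) {
        int32_t sum;
        if (!__builtin_add_overflow(a.v.i, b.v.i, &sum)) {
            r.is_int = 1;
            r.v.i = sum;
            return r;
        }
    }
    r.is_int = 0; /* promote on overflow or mixed input */
    r.v.d = num_as_double(a) + num_as_double(b);
    return r;
}
```

Real engines pack the tag and payload into a single NaN-boxed word rather than a struct, but the promotion logic is the same shape.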
 If you mean arithmetic on floating point numbers that only store integers, then no, equivalence is not OK, because there are examples where bit equivalence fails. One example is repeatedly adding 1.0 to a float something like a trillion times versus multiplying 1000000.0 by 1000000.0. What do you mean by `` algorithms that are more reliably written not to contain any empty intervals are two examples. ``
 Obviously I'm referring to integer operations within a 53-bit or 24-bit range, not outside the ability of floating point representations to represent integers!> What do you mean byI mean exactly that sort of thing. Algorithms that, for example, divide a line into intervals, where empty intervals are not needed or desired, and are generally risky with respect to the probability of having a correct implementation. An example of this would be computing the area of a union of rectangles. You might make a segment tree -- and avoid having degenerate rectangles in your segment tree.
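The empty-interval point can be made concrete with a small sketch (the helper is hypothetical): partition a line at sorted coordinates, skipping the degenerate pieces that duplicate coordinates would otherwise create:

```c
/* Hypothetical sketch: build half-open intervals [xs[i], xs[i+1]) from
   sorted coordinates, skipping zero-length pieces where a coordinate
   repeats, so no empty interval enters a structure like a segment tree. */
static int make_intervals(const double *xs, int n, double *lo, double *hi) {
    int count = 0;
    for (int i = 0; i + 1 < n; ++i) {
        if (xs[i] < xs[i + 1]) { /* drop degenerate [x, x) intervals */
            lo[count] = xs[i];
            hi[count] = xs[i + 1];
            ++count;
        }
    }
    return count;
}
```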
