

0.3 - 0.2 == 0.2 - 0.1 is falsy. Which language gets this right? - factorialboy


======
mooism2
You should perhaps be using a rational number datatype instead of floating
point arithmetic.

It depends what calculations you're doing: if you need to use transcendental
functions then you're stuck with floating point, and you'll have to decide how
close two numbers need to be for you to consider them equal. n.b. _you_ need
to do this: no programming language can know whether false positives are more
or less harmful than false negatives in _your_ program.
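As a rough Python sketch of that decision (the tolerances below are arbitrary assumptions; only you know what is acceptable for your program):

```python
import math

a = 0.3 - 0.2
b = 0.2 - 0.1

# Exact comparison fails: the two doubles differ by a couple of ulps.
print(a == b)  # False

# math.isclose makes the tolerance explicit (default rel_tol=1e-09).
print(math.isclose(a, b))  # True

# An overly strict absolute tolerance flips the answer back to False.
print(math.isclose(a, b, rel_tol=0, abs_tol=1e-20))  # False
```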

------
informatimago
Indeed old programming languages get it right:

Fortran (the oldest still in use):

    
    
        $ cat sub.f
        C Prints 0.3-0.2 and 0.2-0.1
              PRINT 4, 0.3-0.2, 0.2-0.1
            4 FORMAT ('0.3-0.2=',F3.1,' 0.2-0.1=',F3.1)
              STOP 1
              END
        $ gfortran sub.f -o sub
        $ ./sub
        0.3-0.2=0.1 0.2-0.1=0.1
        STOP 1
        $ 
    

Lisp (the second oldest still in use):

    
    
        cl-user> (values (- 3/10 2/10) (- 2/10 1/10))
        1/10
        1/10
    

Cobol (the third oldest still in use):

    
    
        $ cat sub.cob
               IDENTIFICATION DIVISION.
               PROGRAM-ID. DIFFERENCE.
               ENVIRONMENT DIVISION.
               INPUT-OUTPUT SECTION.
               FILE-CONTROL.
               DATA DIVISION.
               FILE SECTION.
               WORKING-STORAGE SECTION.
               77 out32 pic 9.9.
               77 out21 pic 9.9.
               PROCEDURE DIVISION.
                   subtract 0.2 from 0.3 giving out32.
                   subtract 0.1 from 0.2 giving out21.
                   display '; 0.3 - 0.2 = ' out32 .
                   display '; 0.2 - 0.1 = ' out21 .
                   goback.
    
        $ cobc  -fixed -Wcolumn-overflow -Wparentheses  -x sub.cob
        $ ./sub
        ; 0.3 - 0.2 = 0.1
        ; 0.2 - 0.1 = 0.1
        $

~~~
zerohp
These code samples don't test equality. Printing the output probably rounds
off the difference.
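The rounding is easy to demonstrate; here is a small Python illustration of how one-decimal output (like the Fortran F3.1 and COBOL 9.9 pictures above) hides the discrepancy:

```python
a = 0.3 - 0.2
b = 0.2 - 0.1

print(f"{a:.1f} {b:.1f}")  # both print as 0.1
print(a == b)              # False: the underlying doubles differ
print(a, b)                # the full repr reveals the difference
```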

------
anonymouz
As many others have pointed out, this is not really a matter of language but
of the data types used. In IEEE 754 floating point arithmetic, the two
expressions are not equal.

You're looking for a different data type, and many choices would give you the
desired equality: decimal floating point, rational numbers, and interval
arithmetic come to mind (technically, IEEE floating point is a kind of
interval arithmetic, but you have little control over the size of the
intervals used, and comparison does not do what you might naively expect,
i.e., check whether the two intervals intersect).

I suppose you should look for a library like GMP or MPIR for your preferred
language. Most computer algebra systems (e.g., Sage) will also provide you
with what you need in one way or another.
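For illustration, Python's standard-library `fractions.Fraction` is one such rational type (a lightweight stand-in for the GMP/MPIR bindings mentioned above):

```python
from fractions import Fraction

# Build the rationals from strings so they are exact from the start;
# Fraction(0.3) would inherit the float literal's binary rounding error.
a = Fraction("0.3") - Fraction("0.2")
b = Fraction("0.2") - Fraction("0.1")

print(a, b)    # 1/10 1/10
print(a == b)  # True
```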

~~~
beagle3
But the APL family (APL, J, K) does get this right despite using IEEE 754
floating point - it has comparison tolerance built in.

And you would probably look at what it does with disgust, given that it breaks
transitivity of equality (that is, there are numbers such that a==b and b==c
but a<>c). However, in practice it does work very well. You can still run into
weird corners, but e.g.

    
    
        (3*(a/3.0))==a 
    

always holds true, unlike with C comparison semantics.

~~~
anonymouz
This seems to be a very sensible default way to handle the comparison,
assuming the programmer already knows about the problems with floating point
math. A programmer who does not know about that will probably trip up sooner
or later no matter how the language implements comparison.

So personally I would not see it as "right" (or "wrong"), simply because it is
a choice between many different methods with their own advantages and
disadvantages. I view this not as a problem a language can (or has to) solve,
but as one a programmer has to be aware of and has to solve _depending on his
particular application_.

------
shared4you
Of course, Mathematica gets it right :) As a general rule, never use == and !=
operators on floats or doubles.

------
Piskvorrr
It depends on what the meaning of the word "is" is.

In other words, floating point operations in binary computers can give
unintuitive results; welcome to the bizarre world of programming. This may be
a useful starting point for reading:
[http://stackoverflow.com/questions/1167691/ieee-floating-
poi...](http://stackoverflow.com/questions/1167691/ieee-floating-point-
pitfalls-introductory-manual) , or perhaps this:
[http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.ht...](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)

(Note that most programming languages have a prominent warning about this in
the documentation.)

------
d0m
IMHO, the _right_ thing to do would be to forbid the usage of == for
_decimals_ values.

    
    
      1.0 == 0.9 -> Throw err
      1.0 > 0.9 -> true
    

In most cases, you don't want to use == on _double_ because it's very
dangerous... it's way better to use abs(0.3 - 0.3000001) < 0.001 or something
similar. So, by forbidding == on _doubles_, you force the programmer to use a
safer way. At the same time, it's a _clean hack_ around this decimal issue.
As a bonus, a programmer who doesn't know binary arithmetic might get curious
about why == isn't implemented on double and read up on it.
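A Python sketch of that idea (`StrictFloat` is a hypothetical wrapper invented here; no mainstream language ships this behaviour):

```python
class StrictFloat(float):
    """Float whose == and != raise, forcing a tolerance-based comparison."""

    def __eq__(self, other):
        raise TypeError("== is forbidden on StrictFloat; use a tolerance")

    def __ne__(self, other):
        raise TypeError("!= is forbidden on StrictFloat; use a tolerance")

    __hash__ = None  # equality is undefined, so hashing is too


x = StrictFloat(1.0)
print(x > 0.9)  # ordering still works: True
try:
    print(x == 0.9)
except TypeError as err:
    print(err)
```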

------
shangaslammi
In Haskell, decimal literals can be inferred as any Fractional type (including
Rational, which is precise), but they default to Double if there is no context
that dictates otherwise.

    
    
        Prelude> 0.3 - 0.2 == 0.2 - (0.1 :: Rational)
        True
    

Above, we explicitly declare one of the literals as Rational, and the rest are
inferred as having the same type (since the standard library does not allow
arithmetic or comparisons between mismatched numeric types).

------
draegtun
Perl6 gets it right:

    
    
      $ perl6
      > (0.3 - 0.2) == (0.2 - 0.1)
      True
    

Because Perl6 uses _Rationals_ by default:

    
    
      > (0.3 - 0.2).perl
      1/10
    

To get it to work in perl5 you need to use _bigrat_ pragma:

    
    
      $ re.pl
      > (0.3 - 0.2) == (0.2 - 0.1)
      
      > use bigrat;
      > (0.3 - 0.2) == (0.2 - 0.1)
      1
    

I think Clojure is another language that uses _Rationals_ by default.

------
vec
Hopefully none of them (at least none that use floating point arithmetic).
When you convert the equation to the exponent notation you get this:

(1.2x2^-2) - (1.6x2^-3) = (1.6x2^-3) - (1.6x2^-4)

which is pretty clearly false.
<http://www.h-schmidt.net/FloatConverter/IEEE754.html> has a pretty nice
conversion tool, if you're curious.
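If you don't want to leave the REPL, Python's `decimal` module will show the exact value each double actually holds (converting a float to `Decimal` is lossless):

```python
from decimal import Decimal

# Decimal(float) shows the exact value stored in the IEEE 754 double.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.3 - 0.2))  # slightly below 0.1
print(Decimal(0.2 - 0.1))  # exactly the double nearest 0.1
```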

------
beagle3
APL, J and K/Kona all get it "right" - their default floating point comparison
is a "tolerant comparison", basically meaning that floating point numbers
compare equal even if they are slightly apart (a few ulps, configurable). It
does not extend to hashing (because tolerant equality does not actually have
equivalence classes), but is nevertheless an extremely useful behaviour.
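A Python sketch of such a tolerant comparison, measured in ulps via `math.ulp` (Python 3.9+); the 4-ulp default is an arbitrary assumption here, and the real APL/J/K tolerances are configured differently:

```python
import math

def tolerant_eq(a, b, ulps=4):
    """True if a and b differ by at most `ulps` units in the last
    place of the larger magnitude."""
    return abs(a - b) <= ulps * math.ulp(max(abs(a), abs(b)))

print((0.3 - 0.2) == (0.2 - 0.1))         # False: strict IEEE comparison
print(tolerant_eq(0.3 - 0.2, 0.2 - 0.1))  # True: within a few ulps
```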

------
CurtHagenlocher
I suspect COBOL got this "right"...

In C#, (0.3m - 0.2m) == (0.2m - 0.1m). The latest version of the C++ spec
allows for user-defined literals, so you could do something similar there
(with caveats). But as far as I know, there's no modern language in widespread
use that has decimal literals which default to base-ten representations.

~~~
rst
Some SQL implementations seem to. In Postgres:

    
    
      test=# select (0.3 - 0.2) - 0.1;
       ?column? 
      ----------
            0.0
      (1 row)
    

(Ruby gives me -2.77555756156289e-17)

------
darkxanthos
Oddly Javascript does. Probably more oddly I don't get why that's "right"

~~~
infinity
I have just tested it on a Windows Vista machine, it doesn't work in Internet
Explorer, Opera and Safari. Putting each side of the equation into an alert
box shows the difference.

------
S4M
I found once that 3.6 - 3*1.2 == 0 was false in C, and subsequently in all the
languages I could find.

I suspect it is due to the fact that those numbers have an infinite binary
expansion...
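The same check in Python confirms it (1.2 has no finite binary expansion, so the error survives the multiplication):

```python
# 1.2 is not exactly representable in binary, so 3*1.2 accumulates error.
diff = 3.6 - 3 * 1.2

print(diff == 0)  # False
print(diff)       # a tiny nonzero residue on the order of 1e-16
```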

------
genwin
I checked in Go (golang). Verbatim, it's false. But if I save at least one
side of the equality to a variable, or cast at least one side to a float, it's
true.

------
Misiek
PHP (5.3):

    
    
      echo ((float)(string)(0.3 - 0.2) == (float)(string)(0.2 - 0.1)) ? 1 : 0;
      1

------
brghteyes
bc gets it right

    
    
      $ bc -l -q
      .3 - .2 == .2 - .1
      1

------
ehutch79
wait. why is that false? both come out to 0.1, which should be equal?

~~~
Breakthrough
In binary, none of 0.1, 0.2, or 0.3 can be represented exactly: they are
rational, but their radix-2 expansions are non-terminating. Thus, when
actually performing the operation (0.3 - 0.2) - (0.2 - 0.1), the result may
actually be non-zero, although very small.

The best thing to do here would be to use an inline comparison function with a
threshold:

    
    
        #include <cmath>
        
        // Compare doubles against an absolute threshold instead of ==.
        // (Plain abs() would call the integer overload and truncate.)
        inline bool is_equal(double a, double b, double threshold = 1e-7)
        {
            return std::fabs(a - b) < threshold;
        }
    

Unfortunately, compounding floating point operations can drastically reduce
calculation accuracy, leading to requiring a greater threshold value... Which
is why I highly recommend reading the article titled "What Every Computer
Scientist Should Know About Floating-Point Arithmetic" by David Goldberg
(circa 1991, available at
[http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.ht...](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)).

------
rprasad
Any language which uses floating points to represent these values rather than
fixed points.

C# and some versions of C use fixed-point representations for these values by
default (or correct for floating point errors internally, as with C#), and
return the correct answer of true, as .1 is equal to .1 in fixed-point
representations.

EDIT: False is only the correct answer when using floating point
representations, as the equation no longer boils down to .1 == .1, but
something like .999 == 1.0001, etc.
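For comparison, Python's standard `decimal` module gives the same base-ten behaviour that C#'s decimal literals (the `m` suffix) provide; a sketch:

```python
from decimal import Decimal

# Base-ten arithmetic: both differences are exactly 0.1.
a = Decimal("0.3") - Decimal("0.2")
b = Decimal("0.2") - Decimal("0.1")

print(a, b)    # 0.1 0.1
print(a == b)  # True
```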

