
Subtracting large floating point numbers in different languages - wamatt
http://geocar.sdf1.org/numbers.html
======
kovrik
Isn't this expected behaviour (at least for programmers), given IEEE 754?

For example, Racket also gives 2.0 here, but if you use exact numbers (#e),
it gives the exact and correct answer, 1.0.

Same for Java (and any other language): if you use Float/Double numbers, you
are losing precision. Use BigDecimal (or something similar in other
languages) if you want exact calculations.
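In Python, for instance, the same contrast is easy to demonstrate with the built-in `fractions` module, a rough analogue of Racket's exact numbers (a sketch, not tied to the article's examples):

```python
from fractions import Fraction

# Plain floats: 9999999999999999.0 rounds to the nearest double (1e16),
# so the difference comes out as 2.0.
print(9999999999999999.0 - 9999999999999998.0)   # 2.0

# Exact rationals keep full precision, like Racket's #e numbers:
print(Fraction("9999999999999999.0") - Fraction("9999999999999998.0"))  # 1
```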

~~~
kovrik
UPD: An interesting question, though: if language X gives the correct answer
for this example (without special classes or functions), does that mean
language X is not following IEEE 754 and does something else under the hood,
and hence will be much slower?

Take Java, for example: no one uses BigInteger/BigDecimal by default because
they are waaay slower.

You can't have precision for free, can you?

~~~
MaulingMonkey
Some dynamically typed languages use hybrid approaches - e.g. Ruby has Fixnum
and Bignum for integers - but you're correct that additional precision isn't
free.

I don't know _how_ much slower, but in theory such hybrid approaches can
allow the common case to be just as fast, or fast enough (tm), while still
returning correct results in the uncommon case (depending on how much the
runtime or compiler can prove about the range of your numbers to enable the
fast path).

EDIT: Poor phrasing...
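Python's integers are one example of this kind of hybrid: a single arbitrary-precision type, with CPython handling the machine-word-sized case efficiently under the hood (a sketch of the behaviour, not a benchmark):

```python
# Python 3 has a single int type that never overflows; CPython
# special-cases small values internally, so the common case stays cheap.
a = 9999999999999999
b = 9999999999999998
print(a - b)      # 1 -- exact integer arithmetic
print(2 ** 100)   # 1267650600228229401496703205376 -- silently arbitrary precision
```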

------
ubernostrum
Alternate title: "How to get IEEE 754 right".

~~~
no_protocol
Short, but seems an appropriate response given the lack of explanation in the
linked article.

IEEE 754 double-precision (64-bit) can only represent even numbers in the
range 2^53 to 2^54.

Haskell lets us look inside them fairly easily:

    
    
      > exponent 9999999999999999.0
      54
      > significand 9999999999999999.0
      0.5551115123125783
      > exponent 9999999999999998.0
      54
      > significand 9999999999999998.0
      0.5551115123125782
      > 2^54 * 0.5551115123125783
      1.0e16
      > 2^54 * 0.5551115123125782
      9.999999999999998e15

~~~
ubernostrum
Python also makes it easy to see what's going on in the REPL:

    
    
        >>> 9999999999999999.0
        1e+16
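The same decomposition as the Haskell `exponent`/`significand` calls above is available in Python via `math.frexp`, which also normalizes the significand into [0.5, 1):

```python
import math

# frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= m < 1,
# matching Haskell's significand/exponent above.
print(math.frexp(9999999999999999.0))  # (0.5551115123125783, 54)
print(math.frexp(9999999999999998.0))  # (0.5551115123125782, 54)
```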

------
Lazare
Kind of confused about what the point is. Is this a complaint that most
languages use IEEE 754 for non-integer numbers and he thinks they shouldn't,
or a veiled dig at how many programmers don't know this, or...?

The color coding of the results suggests the author thinks that 2 is wrong
and 1 is right, but he's going out of his way to specify floating point
numbers, and when subtracting those two floating point numbers the correct
answer is 2 and NOT 1.

E.g., Ruby thinks 9999999999999999.0 - 9999999999999998.0 = 2, but
9999999999999999 - 9999999999999998 = 1. Which is... correct. Right? Unless
you don't think IEEE 754 should be the default?

I feel like the author is trying to make a clever point, but if so, I'm not
getting it.

------
ruricolist
Note that the Common Lisp example could be abbreviated to:

(- 9999999999999999.0l0 9999999999999998.0l0)

However, this will only work (either way) in Lisps where long floats are
arbitrary precision, like CLISP. In most Lisps (e.g. SBCL, CCL), long floats
are just double floats.

------
lightlyused
For java:

    
    
        BigDecimal b1 = new BigDecimal("9999999999999999.0");
        BigDecimal b2 = new BigDecimal("9999999999999998.0");
        System.out.println(b1.subtract(b2));

prints 1.0

~~~
cowsandmilk
Python:

    
    
        >>> import decimal
        >>> decimal.Decimal('9999999999999999.0')-decimal.Decimal('9999999999999998.0')
        Decimal('1.0')

~~~
ScottBurson
Or just

    
    
      >>> 9999999999999999 - 9999999999999998
      1

------
aftbit
How does Google get 0 out of that? Both my high school math teacher and IEEE
754 think that's wrong.

~~~
PDoyle
9999999999999999.0 can't be represented as a double. Depending on the
rounding mode, it could become 9999999999999998.0, in which case the
difference is zero. That happens when the rounding mode is round toward zero
or round down. Any other mode rounds it up to 1e+16, so you get an answer
of 2.
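The two representable doubles that bracket 9999999999999999 are easy to find in Python (3.9+, which added `math.nextafter`):

```python
import math

x = 9999999999999999.0        # not representable; rounds to a neighbour
print(x)                       # 1e+16 under the default round-to-nearest

# The representable double just below 1e16:
below = math.nextafter(1e16, 0.0)
print(below)                   # 9999999999999998.0
print(1e16 - below)            # 2.0 -- the spacing between doubles in [2^53, 2^54)
```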

------
ivan_ah
SymPy, being Python, gets it wrong... but if you use sympify (bound to S
below) to turn the long numbers into SymPy objects, it works:

    
    
        from sympy import sympify
        S = sympify
        S('9999999999999999.0') - S('9999999999999998.0')
        1.0

~~~
flaviojuvenal
You can also use the built-in Decimal:

    
    
      In [1]: from decimal import Decimal as D
      In [2]: D('9999999999999999.0') - D('9999999999999998.0')
      Out[2]: Decimal('1.0')
    

From docs: Decimal “is based on a floating-point model which was designed with
people in mind, and necessarily has a paramount guiding principle – computers
must provide an arithmetic that works in the same way as the arithmetic that
people learn at school.”

[https://docs.python.org/3.6/library/decimal.html](https://docs.python.org/3.6/library/decimal.html)

~~~
PDoyle
...until they divide by 3.
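PDoyle's point is easy to reproduce: Decimal is still finite-precision arithmetic (28 significant digits by default), so non-terminating decimals get rounded:

```python
from decimal import Decimal, getcontext

print(getcontext().prec)    # 28 -- default number of significant digits
third = Decimal(1) / Decimal(3)
print(third)                # 0.3333333333333333333333333333
print(third * 3)            # 0.9999999999999999999999999999, not 1
```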

------
rurban
The perl5 solution is

    
    
        perl -Mbignum  -e 'print 9999999999999999.0-9999999999999998.0;print "\n";'
    
    1

------
oulu2006
The macOS search bar gives the correct result, 1, because it evaluates the
expression using the Calculator app.

------
CalChris
Wolfram Alpha gives 1. Bing gives 1.

~~~
mastre_
Bing gives 1 in full search, 2 in quick/suggested search:
[https://i.imgur.com/wCMueSD.png](https://i.imgur.com/wCMueSD.png)

