Subtracting large floating point numbers in different languages (sdf1.org)
35 points by wamatt on Nov 8, 2016 | 23 comments



Isn't this the expected behaviour (at least for programmers) because of IEEE 754?

For example, in Racket this also gives 2.0, but if you use exact numbers (#e), it gives the exact and correct answer, 1.0.

Same for Java (and any other language): if you are using Float/Double numbers, then you are losing precision. Use BigDecimal (or something similar in other languages) if you want exact calculations.
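In Python, for instance, fractions.Fraction plays the same role as Racket's exact numbers (a minimal sketch, using only the standard library):

    >>> from fractions import Fraction
    >>> Fraction('9999999999999999.0') - Fraction('9999999999999998.0')
    Fraction(1, 1)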


If there's a problem here, it's that large integers are being treated as floating-point numbers with insufficient precision to represent them properly. With the exception of Perl 6 (?!), programming languages all seem to agree that putting a period in a number gives them a license to return answers that don't agree with normal arithmetic. If you think about it, that's an odd thing to be unanimous about.
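You can see this in Python: the integer literal stays exact, but the float literal has already been rounded to the nearest representable double before any arithmetic happens:

    >>> 9999999999999999 - 9999999999999998
    1
    >>> float(9999999999999999) == float(10000000000000000)
    True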

Also, BigDecimal doesn't give "exact calculations" unless you stick to numbers with finite decimal expansions. BigDecimal can't do 1/3 without rounding.
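Python's decimal module (the closest analogue) shows the same limitation; with the default 28-digit context:

    >>> from decimal import Decimal
    >>> Decimal(1) / Decimal(3)
    Decimal('0.3333333333333333333333333333')
    >>> Decimal(1) / Decimal(3) * 3
    Decimal('0.9999999999999999999999999999')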


UPD: An interesting question, though: if language X gives the correct answer for this example (without special classes or functions), does that mean that language X is not following IEEE 754 and is doing something else under the hood, and hence will be much slower?

Take, for example, Java: no one uses BigInteger/BigDecimal by default because they are waaay slower.

You can't have precision for free, can you?
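A rough micro-benchmark sketch in Python (absolute numbers will vary by machine; the point is the ratio):

    import timeit

    # one million hardware-float subtractions
    flt = timeit.timeit('a - b',
        setup='a = 9999999999999999.0; b = 9999999999999998.0')
    # one million arbitrary-precision Decimal subtractions
    dec = timeit.timeit('a - b',
        setup="from decimal import Decimal; "
              "a = Decimal('9999999999999999.0'); "
              "b = Decimal('9999999999999998.0')")
    print(flt, dec)  # expect the Decimal version to be several times slower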


Some dynamically typed languages use hybrid approaches - e.g. Ruby has Fixnum and Bignum for integers - but you're correct that additional precision isn't free.

I don't know how much slower, but in theory such hybrid approaches can allow the common case to be just as fast, or fast enough (tm), while still returning correct results for the uncommon case (depending on how much the runtime or compiler can prove about the range of your numbers to enable the fast path).

EDIT: Poor phrasing...
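For what it's worth, Python 3 takes the hybrid idea to its conclusion: every int is arbitrary precision (CPython still special-cases small values internally), so integer arithmetic never silently overflows or rounds:

    >>> 2**62 * 2            # past the signed 64-bit range, still exact
    9223372036854775808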


Yeah. And Decimal in Python would return the correct value too.


Alternate title: "How to get IEEE 754 right".


Short, but seems an appropriate response given the lack of explanation in the linked article.

IEEE 754 double-precision (64-bit) can only represent even integers in the range 2^53 to 2^54.

Haskell lets us look inside them fairly easily:

  > exponent 9999999999999999.0
  54
  > significand 9999999999999999.0
  0.5551115123125783
  > exponent 9999999999999998.0
  54
  > significand 9999999999999998.0
  0.5551115123125782
  > 2^54 * 0.5551115123125783
  1.0e16
  > 2^54 * 0.5551115123125782
  9.999999999999998e15


Python also makes it easy to see what's going on in the REPL:

    >>> 9999999999999999.0
    1e+16
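float.hex() shows the rounding directly: the odd literal collapses onto 1e16, while the even one keeps its own bit pattern:

    >>> (9999999999999999.0).hex()
    '0x1.1c37937e08000p+53'
    >>> (1e16).hex()
    '0x1.1c37937e08000p+53'
    >>> (9999999999999998.0).hex()
    '0x1.1c37937e07fffp+53'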


Kind of confused about what the point is. Is this a complaint that most languages use IEEE 754 for non-integer numbers and he thinks they shouldn't, or a veiled dig at how many programmers don't know this, or...?

The color coding of results suggests the author thinks that 2 is wrong and 1 is right, but he's going out of his way to specify floating point numbers, and when subtracting those two floating point numbers the correct answer is 2 and NOT 1.

E.g., Ruby thinks 9999999999999999.0 - 9999999999999998.0 = 2, but 9999999999999999 - 9999999999999998 = 1. Which is...correct. Right? Unless you don't think IEEE 754 should be the default?

I feel like the author is trying to make a clever point, but if so, I'm not getting it.


Note that the Common Lisp example could be abbreviated to:

    (- 9999999999999999.0l0 9999999999999998.0l0)

However, this will only work (either way) in Lisps where long floats are arbitrary precision, like CLISP. In most Lisps (e.g. SBCL, CCL), long floats are just double floats.


For Java:

    import java.math.BigDecimal;

    BigDecimal b1 = new BigDecimal("9999999999999999.0");
    BigDecimal b2 = new BigDecimal("9999999999999998.0");
    System.out.println(b1.subtract(b2));

prints 1.0


Python:

    >>> import decimal
    >>> decimal.Decimal('9999999999999999.0')-decimal.Decimal('9999999999999998.0')
    Decimal('1.0')


Or just

  >>> 9999999999999999 - 9999999999999998
  1


How does Google get 0 out of that? Both my high school math teacher and IEEE 754 think that's wrong.


9999999999999999.0 can't be represented as a double. Depending on the rounding mode, it could become 9999999999999998.0, so when you subtract them the difference is zero. That happens when the rounding mode is round toward zero or round down. Any other mode rounds it up to 1e+16, so you get an answer of 2.
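You can see the two candidates from Python (math.nextafter needs 3.9+); there is no double between them, so 9999999999999999.0 has to become one or the other:

    >>> import math
    >>> math.nextafter(1e16, 0)        # largest double below 1e16
    9999999999999998.0
    >>> math.nextafter(9999999999999998.0, math.inf)
    1e+16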


Maybe it is using single precision.
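That would explain it; a quick check with NumPy's float32 (assuming numpy is installed) reproduces the 0, since both values round to the same single-precision number:

    >>> import numpy as np
    >>> float(np.float32(9999999999999999.0) - np.float32(9999999999999998.0))
    0.0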


SymPy, being Python, gets it wrong... but if you use S (SymPy's sympify shortcut) to turn the long numbers into SymPy objects, it works:

    S('9999999999999999.0') - S('9999999999999998.0')
    1.0


You can also use the built-in Decimal:

  In [1]: from decimal import Decimal as D
  In [2]: D('9999999999999999.0') - D('9999999999999998.0')
  Out[2]: Decimal('1.0')

From the docs: Decimal "is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school."

https://docs.python.org/3.6/library/decimal.html


...until they divide by 3.


The Perl 5 solution is:

    perl -Mbignum  -e 'print 9999999999999999.0-9999999999999998.0;print "\n";'

1


The macOS search bar gives the correct result, 1, because it evaluates the expression using the Calculator app.


Wolfram Alpha gives 1. Bing gives 1.


Bing gives 1 in full search, 2 in quick/suggested search: https://i.imgur.com/wCMueSD.png



