

1.15 * 100 = 114.99999999999999 except in Ruby - kungfooguru

All I've tried (Haskell (GHC), Python, Erlang, Clojure, Ruby) except Ruby give 114.99999999999999. What part of the IEEE floating point (double/single precision) standard is causing this :) and why doesn't it happen in Ruby?

      $ ghci
      Prelude> 1.15 * 100
      114.99999999999999
    
      $ python
      >>> 1.15 * 100
      114.99999999999999
    
      $ erl
      1> 1.15 * 100.
      114.99999999999999
    
      $ irb
      >> 1.15 * 100
      => 115.0
======
acqq
It's all about the infinite number of bits that get lost as soon as the
decimal fraction is converted to binary. See
<http://babbage.cs.qc.edu/IEEE-754/>

Enter 1.15 and marvel at how the mantissa is represented:

1.0010011001100110011001100110011001100110011001100110

The whole FP number in hex is 3ff2666666666666.

If there were more bits, 1100 would still continue to repeat. But you have to
store that number in a fixed number of bits. _Whichever fixed number of bits
you select, you'll miss the infinite tail of repeats!_ Modern CPUs and
languages represent the whole number in 8 bytes, in _binary_ base, with a few
bits taken for the exponent. The above number is

    
    
      3ff2666666666666
    

So now you multiply that by decimal 100. The result is still a series of
repeats:

    
    
      405cbfffffffffff
    

Whereas exact 115 would be:

    
    
      405cc00000000000
    

Why do we get a one-bit difference? Because we started from the finite
_binary_ representation of "1.15", which is not equivalent to your decimal
"1.15".
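
The bit patterns quoted above can be reproduced in a few lines of Python (a small sketch using the stdlib struct module; `double_hex` is just a helper name for illustration):

```python
import struct

# Dump the raw IEEE-754 binary64 bits of a float as hex
# ('>d' = big-endian double, 8 bytes).
def double_hex(x):
    return struct.pack('>d', x).hex()

print(double_hex(1.15))        # 3ff2666666666666
print(double_hex(1.15 * 100))  # 405cbfffffffffff
print(double_hex(115.0))       # 405cc00000000000
```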

If you don't want such things to happen, you should use decimal floating
point:

<http://en.wikipedia.org/wiki/Decimal_floating_point>

We write only decimal representations, so a decimal representation used
internally would always produce the "expected" results.
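
In practice this usually means a library type rather than hardware; a minimal sketch with Python's stdlib decimal module:

```python
from decimal import Decimal

# Base-10 fractions stay exact in Decimal, so 1.15 really is 1.15.
# Note: construct from the *string* '1.15'; Decimal(1.15) would inherit
# the binary rounding error of the float literal.
print(Decimal('1.15') * 100)   # 115.00
```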

Currently no Intel processor supports such numbers in hardware, which is why
they are seldom built into languages.

As far as I know only IBM processors have hardware implementation of such
numbers:

[http://www.ibm.com/developerworks/wikis/display/hpccentral/H...](http://www.ibm.com/developerworks/wikis/display/hpccentral/How+to+Leverage+Decimal+Floating-Point+unit+on+POWER6+for+Linux)

Your Ruby executable just rounds the binary-represented result before
displaying it as decimal. It depends on the conversion libraries used and the
default rounding limits.

------
spicyj

      $ irb
      >> 115.0 - 1.15 * 100
      => 1.4210854715202e-14
    

The result is the same; irb just prints the number at lower precision than the
other shells.
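
A sketch of the same effect in Python: one value, two display precisions. Seventeen significant digits are needed to round-trip a double; at fifteen, the error rounds away:

```python
x = 1.15 * 100

# Identical double, different display precision.
print('%.17g' % x)  # 114.99999999999999
print('%.15g' % x)  # 115
```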

~~~
yesyes1788
How come?

------
gus_massa
The problem is that the decimal numbers in IEEE-754 are approximated by
fractions where the denominator is a power of 2. So

    
    
      1.15 =~ 5179139571476070 / 2**52 
           =  5179139571476070 / 4503599627370496 
           =~ 1.1499999999999999111821580299875...
    

And then

    
    
      1.15*100 =~ (5179139571476070 / 2**52) * 100  
               =  (5179139571476070 / 4503599627370496) * 100 
               =~ 114.99999999999999111821580299875...
    

You can read a more detailed analysis of how 0.1 is represented in Python at:
<http://docs.python.org/tutorial/floatingpoint.html>
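
Those fractions can be verified directly with Python's stdlib fractions module (a sketch; note that Fraction reduces to lowest terms, so we compare against the unreduced 5179139571476070/2**52):

```python
from fractions import Fraction

# Fraction(float) captures the exact binary value the double stores.
f = Fraction(1.15)
print(f == Fraction(5179139571476070, 2**52))  # True
# The exact product, rounded back to a double:
print(float(f * 100))                          # 114.99999999999999
```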

------
goshakkk
That happened to me in Ruby, too: <http://cl.ly/3x391Q1X2N3k3R030N03>

~~~
kungfooguru
Weird, I'm on

    
    
      $ irb --version
      irb 0.9.5(05/04/13)
    

and it gives 115.0: <http://i.imgur.com/gGLAd.png>

~~~
Jacquass12321

      C:\Projects>irb -v
      irb 0.9.6(09/06/30)
    
      C:\Projects>irb
      irb(main):001:0> 1.15*100
      => 114.99999999999999
    

I don't believe it's a standards issue so much as just a facet of floating
point math in binary. Since you can't actually store 1.15 exactly, you can
only store a series of fractions that sums to something very close to 1.15.
I don't have much knowledge of floating point, but I'd assume it stops at
some value within epsilon of 1.15. When you multiply that value by 100, the
sub-epsilon error becomes large enough that the language can no longer round
it away for display.

    
    
      irb(main):020:0> 1.15*100 > 115-Float::EPSILON
      => false
      irb(main):021:0> 1.15*100 > 115-(Float::EPSILON*100)
      => true
    

I'm assuming older IRB just had a wider acceptance range than epsilon.
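
For what it's worth, the gap can be pinned down exactly (a Python sketch; sys.float_info.epsilon plays the role of Ruby's Float::EPSILON):

```python
import sys

# The error in 1.15 * 100 is exactly one unit in the last place of 115:
# machine epsilon (2**-52) scaled up by 115's binade (2**6), i.e. 2**-46.
diff = 115.0 - 1.15 * 100
print(diff)
print(diff == 2**-46 == 64 * sys.float_info.epsilon)  # True
```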

------
celias
The "mindless" paper from William Kahan has a nice discussion of roundoff
errors: <http://www.cs.berkeley.edu/~wkahan/Mindless.pdf> The ~wkahan path has
more papers.

------
clementi1800
Here's bc:

    
    
        bc 1.06
        Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
        This is free software with ABSOLUTELY NO WARRANTY.
        For details type `warranty'.
        1.15 * 100
        115.00

------
bromagosa
There are other languages where the result is the expected one, like
Smalltalk/X: <http://i.imgur.com/5pKAt.png>

------
kodablah
ECMAScript has this too, as does Java. Of course, using BigDecimal (or just
using Groovy), or using strictfp in Java, prevents this issue but slows
performance.

------
yacin
sbcl does it right:

    
    
      * (* 1.15 100)
      115.0

------
clementi1800
And C#: <http://i.imgur.com/Eo2Vw.png>

------
signalsignal
FWIW, using CLISP 2.49 REPL

    
    
      [1]> (* 1.15 100)
      115.0

------
gesman
PHP 5.x gives 115 too :)

