
0.30000000000000004 - cocoflunchy
http://0.30000000000000004.com/
======
Tepix
Just adding 0.1 to 0.2 in the language of your choice can be misleading
because printing the number might just cut off the digits at some point.

Here's an example using perl5:

perl -e 'print 0.1+0.2' yields '0.3', which looks OK, however

perl -e 'print 0.1+0.2-0.3' reveals '5.55111512312578e-17'

So, it's better to print the result of 0.1+0.2-0.3

For what it's worth, perl6 passes this test with flying colours.

~~~
static_noise
> For what its worth, perl6 passes this test with flying colours.

Do we really expect calculations on floating point numbers to be the same as
calculations with real numbers?

The only problem I see is that often the floating point format is not
specified properly. When one writes "0.3", what does it actually mean? Does
the format the compiler uses internally conform to IEEE 754? Is it 32-bit,
64-bit, 80-bit or something else?

~~~
panic
_The only problem I see is that often the floating point format is not
specified properly. When one writes "0.3", what does it actually mean?_

It's perfectly well-specified. If you are using double-precision floating
point, "0.3" means 0x3fd3333333333333 (as bits), or
0.299999999999999988897769753748434595763683319091796875 (as an exact number:
the closest representable number to 0.3).
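
Both renderings are easy to check for yourself; here is a minimal sketch
using Python's standard struct and decimal modules:

        import struct
        from decimal import Decimal

        # The raw bit pattern of the double nearest to 0.3
        bits = struct.unpack('<Q', struct.pack('<d', 0.3))[0]
        print(hex(bits))     # 0x3fd3333333333333

        # The exact rational value of that double, written out in full
        print(Decimal(0.3))  # 0.299999999999999988897769753748434595763683319091796875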

~~~
static_noise
When your compiler does its internal calculations in double precision float
you are right.

But why should it do so? Do all languages specify the details of how floating
point input is handled by the compiler/interpreter?

~~~
gsnedders
_Most_ languages do (though implementation bugs are common—plenty of languages
where 64-bit precision is required actually end up being implemented in such a
way that 80-bit precision is used on x86).

That said, a lot of low-level languages don't: C doesn't guarantee this, for
example. C doesn't even mandate IEEE 754, and it merely requires that "float"
is a subset of the values of "double" which is itself a subset of the values
of "long double".

------
mehrdada
_" Computers can only natively store integers, so they need some way of
representing decimal numbers"_

Technically, [digital] computers only natively "store" high and low voltages.
The interpretation of a sequence of those signals as "integers" is entirely up
to you.

In all seriousness, representing integers is not a given either (e.g. two's
complement vs one's complement, BCD, bignum, sparse integers, etc.). If you
are going down in abstraction levels to describe floating point
representation, you should not just call integers "native" blindly.

Furthermore, floating point is not the only way to represent computation
involving real values. It is indeed possible to represent computation on real
values symbolically in many cases and lazily compute arbitrary precision
results for output-printing purposes.
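
As a sketch of that symbolic route, assuming the third-party sympy library
(any CAS would do):

        from sympy import Rational, sqrt, pi

        print(Rational(1, 10) + Rational(1, 5))  # 3/10, exactly
        print(sqrt(2) * sqrt(2))                 # 2: kept symbolic, simplified exactly
        print(pi.evalf(50))                      # 50 digits, computed only when asked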

~~~
illumen
Technically, much storage does not need voltage being high or low either. Pits
burned with lasers, and magnetic changes are also common ways to store stuff
that don't involve voltage in the stored state.

Here you can see python has a _fractions.Fraction_ type which can be used for
rational number arithmetic.

    
    
      >>> from fractions import Fraction
      >>> Fraction("0.1") + Fraction("0.2")
      Fraction(3, 10)
    
      >>> repr(float(Fraction("0.1") + Fraction("0.2")))
      '0.3'
    

Here we see the floating point behaviour.

    
    
      >>> 0.1 + 0.2
      0.30000000000000004
    

Here we see that Python has a nice behaviour when printing floats.

    
    
      >>> print(0.1 + 0.2)
      0.3
     
    

Here is a common error: feeding Fraction (and Decimal) a float instead of a
string. Even though you use Fraction here, it still carries the floating
point error through.

    
    
      >>> Fraction(0.1) + Fraction(0.2)
      Fraction(10808639105689191, 36028797018963968)
      >>> float(Fraction(0.1) + Fraction(0.2))
      0.30000000000000004
    
    

Here is the Decimal type, which does not have this float problem. Some people
(including myself) have argued that Decimal should be used by Python instead
of float, since it is much friendlier to people doing things like adding
these numbers together.

    
    
      >>> from decimal import Decimal
      >>> Decimal("0.1") + Decimal("0.2")
      Decimal('0.3')
      >>> float(Decimal("0.1") + Decimal("0.2"))
      0.3

~~~
jessaustin
_Here is the Decimal type which does not have this float problem._

Are you sure?

    
    
      >>> Decimal(0.1)
      Decimal('0.1000000000000000055511151231257827021181583404541015625')

~~~
karl42
Use Decimal('0.1'); otherwise you will first generate a floating point number
(with accuracy loss) and then convert it to a decimal.

------
kazinator
Those languages which print 0.30000000000000004 are doing something which is
silly as a default: they are printing 17 decimal digits, out of a type that
only supports 15.

    
    
       $ txr -p '(+ .1 .2)'
       0.3
    

Why is that? Here, the underlying type is the C double type. The printed
representation is obtained according to a default precision. That default is
taken from the C constant DBL_DIG. For an IEEE 754 double, that constant is 15.

It is misleading to print more decimal digits out of a double than DBL_DIG;
that is the constant which tells you how many decimal digits of precision
double can reliably store. Thus I chose that constant as the default printing
precision for floats: to give the programmer/user the maximum realistic
precision. That is to say, print the decimal digits which are _plausibly_
there, and not any fictional ones.

17 decimal digits requires a 57 bit mantissa -- ceil((log 10 / log 2) * 17) = 57.
The 64 bit double type has only 52. So if you print 17, you're "making shit
up".

~~~
StefanKarpinski
This is a common but fundamentally wrong view of floating-point numbers. You
are not "making shit up" by printing more that 15 digits of a floating-point
number. Floats, doubles, etc. have precise values. As I said in another
comment, the double represented by the literal `0.1` is precisely
0.1000000000000000055511151231257827021181583404541015625; similarly, `0.2` is
precisely 0.200000000000000011102230246251565404236316680908203125 and `0.3`
is precisely 0.299999999999999988897769753748434595763683319091796875. From
these exact values, you can see why `0.1 + 0.2 != 0.3` – the left hand side is
greater than 0.3 while the right hand side is smaller than 0.3. Printing only
15 digits is not doing programmers any favors: even though `0.1 + 0.2` will
print as "0.3", that just makes the lie even worse – they will be even more
confused when `0.1 + 0.2 != 0.3` even though both values print identically.
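
A minimal Python sketch of that trap (%.15g truncates to 15 significant
digits):

        x, y = 0.1 + 0.2, 0.3
        print(x == y)                   # False: the two doubles differ
        print("%.15g  %.15g" % (x, y))  # 0.3  0.3 -- yet they print identically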

~~~
kazinator
Your claim is fallacious, because a floating-point value denotes a _range_ of
the real number line, whereas you're insisting that, no, it stands for its
_literal_ value: the absolutely precise rational number that lies at the
center of that range.

That is no more true than my current body temperature being precisely 36 1/10
degrees Celsius because the thermometer reads 36.1.

All numbers in that range alias to the same double, yet differ wildly in their
decimal digits beyond the fifteenth. That "long tail" of digits is a
meaningless residue arising from the arbitrary difference between the chosen
center-of-range point and the actual number it approximates.

> _Printing only 15 digits is not doing programmers any favors_

Note that the ISO C function printf uses 6 digits of precision under the %g
conversion specifier by default. So a team of experts can find it justifiable
to severely truncate precision on printing. I find that justifiable also, like
this: printing is not only for constants, and trivial expressions like 0.1 +
0.2, but for the results of complex calculations, which accumulate significant
rounding errors. Six digits of precision is prudent: for many complex
calculations, it will avoid misleading the user with too much precision.
Fifteen decimal digits will be wrong after just a few operations; there is
hardly any "headroom" to absorb error.

> _just makes the lie even worse_

If merely neglecting to reveal some aspect of the stark truth is tantamount to
lying, then you're also lying when you pull a number with 54 significant
decimal figures out of a 64 bit double. Revealing some truth without an
explanation can also be construed as "lying", as in "[N]one of woman born
shall harm Macbeth".

I personally find it convenient that when I bang the token 0.3 into my REPL,
it comes back with a tidy 0.3. I know that there are several values of double
which will print as 0.3, and don't compare with exact equality except in
special circumstances (like when deliberately using integral values not far
from zero).

~~~
acidflask
> Your claim is fallacious, because a floating-point value denotes a range of
> the real number line, whereas you're insisting that, no, it stands for its
> literal value: the absolutely precise rational number that lies at the
> center of that range

See Misconception #2 of [http://lipforge.ens-
lyon.fr/www/crlibm/documents/cern.pdf](http://lipforge.ens-
lyon.fr/www/crlibm/documents/cern.pdf).

Floating point numbers are _not_ intervals. If you read carefully any formal
definition of floating point numbers (The IEEE standards, TAoCP Chapter 4, or
Higham's _Accuracy and Stability of Numerical Algorithms_, to name just three
possible references), you will see that floating point numbers by definition
form an exact rational subset of the extended real line.

> All numbers in that range alias to the same double, yet differ wildly in
> their decimal digits beyond the fifteenth. That "long tail" of digits is a
> meaningless residue arising from the arbitrary difference between the chosen
> center-of-range point and the actual number it approximates.

See Misconceptions #1 and #2 on Kahan's list
([https://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf](https://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf)).

You are not being clear about the set of real numbers a user may input and
their ultimate representation as a floating point number. While it's true that
many real numbers round to the same floating point number _f_, that's not the
same as then saying that _f_ carries that interval around with it. The latter
is false, since the intervals do _not_ propagate in floating point arithmetic.
It's also false that the floating point number _f_ is the midpoint of the set
of numbers that round to _f_; the precise set depends on the rounding mode and
the granularity of the set of floating point numbers around _f_.

~~~
kazinator
I didn't say they were _intervals_ (I'm well aware of interval representations
for numbers, which single point rationals are not).

> _that floating point numbers by definition form an exact rational subset of
> the extended real line._

Sure, that's what they _form_; it's not what they (usually) _denote_.

> _It's also false that the floating point number f is the midpoint of the
> set of numbers that round to f;_

It is close enough to being the case for my purpose in the grandparent
article. Whether the cluster of numbers is lopsided one way or the other
doesn't really detract from my point.

~~~
StefanKarpinski
Are they specific values or are they intervals? You can't have it both ways.

------
ThePhysicist
I just have to post this:

[http://www.smbc-comics.com/?id=2999](http://www.smbc-comics.com/?id=2999)

So it seems SQL does not give access to the secret robot Internet...

~~~
scrollaway
I think this sort of captcha could actually work, hah.

~~~
JupiterMoon
For a short while but I'm sure that bot makers would use a decimal library if
this became common.

------
Someone
For real fun, use Excel:

    
    
        =0.1+0.2-0.3        => 0
    
        =(0.1+0.2-0.3)      => 5.55112E-17
    

(inspired by
[http://www.cs.berkeley.edu/~wkahan/Mindless.pdf](http://www.cs.berkeley.edu/~wkahan/Mindless.pdf))

~~~
sdegutis
Tried it in Numbers and they both say 0.0000000000000000555111512312578

EDIT: Numbers is
[http://www.apple.com/mac/numbers/](http://www.apple.com/mac/numbers/)

~~~
ajross
That... is not a known hardware floating point representation. There are way
too many digits. This is a just-plain-buggy conversion routine I'm guessing?
What is Numbers, and why isn't it using a proper arbitrary precision library
if it doesn't want to use doubles?

Edit: sorry, my brain was still reading the leading 0.3 from the linked
article. There is indeed nothing wrong with that representation except the
nonstandard leading zeroes.

~~~
merpnderp
A double can hold 15-17 significant base-10 digits and can represent
magnitudes up to about 10^308 (11 bits for the exponent, 52 bits for the
fraction, 1 bit for the sign).

~~~
ajross
Yes yes yes, I know how to represent an IEEE value. It was an interpretation
error: the linked article shows a zillion variants of numbers with leading
significant digits of 0.3 (i.e. all the following zeros are significant),
while the posted string above did not.

------
annnnd
What Every Computer Scientist Should Know About Floating-Point Arithmetic:
[https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h...](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)

~~~
tbronchain
Here is the part that made me understand it all: "Floating-point
representations have a base β (which is always assumed to be even) and a
precision p. If β = 10 and p = 3, then the number 0.1 is represented as 1.00 ×
10^-1. If β = 2 and p = 24, then the decimal number 0.1 cannot be represented
exactly, but is approximately 1.10011001100110011001101 × 2^-4."

------
wereHamster
In Haskell/GHC it depends on what representation you chose. Default is Double,
which exhibits this problem. But if I force it to Float I get the 'correct'
result. Or if I use the 'Scientific' data type (which has arbitrary
precision). Just like 'Int' vs 'Integer', where the former is restricted by
the machine's word size, while the latter is unbounded.

    
    
        Prelude> 0.1 + 0.2 :: Float
        0.3
        Prelude> 0.1 + 0.2 :: Double
        0.30000000000000004
        Prelude> :m +Data.Scientific
        Prelude Data.Scientific> 0.1 + 0.2 :: Scientific
        0.3

~~~
headmelted
So effectively, in Haskell, it's both and neither.

A Float will round, as expected.

A Double will return the correct value, as it should.

Shouldn't the Scientific, if it has arbitrary precision, arrive at the correct
value too?

Clearly the only logical conclusion is that the Glasgow compiler, at its
roots, virtualizes the entire quantum mechanics model as a compilation step,
enabling superposition-as-a-feature.

My futile attempts at humour aside, this is quite an interesting thing to know
and exactly the kind of intricate little detail I was hoping for.

Tip of the hat, friend.

~~~
wereHamster
The Scientific type does arrive at the correct value. 0.3 is the correct
answer, and Scientific gives it to you. I don't know why you would expect
something else from a type which implements arbitrary precision arithmetic.

------
avian
I'm not sure what was the intended message of the table of examples. It is
possible in many languages to get both types of ASCII representations for
floating point: a nice human-readable 0.3 and the technically correct
0.30000000000000004. But some examples seem artificially biased towards one or
the other. Only for Python both variants are shown. And then some examples use
decimal representations, which are not floating point at all.

For instance, in Perl "print 0.1+0.2" will give you "0.3" just like in Python,
while the printf format shown here will give you the non-rounded
representation. And there are of course decimal and rational number packages
for Perl as well.

~~~
Tepix
Yep, it's better to print the result of 0.1+0.2-0.3

~~~
bmn_
Not generally!

    
    
        $ perl -E 'say sprintf "%.64f", 0.1 + 0.2'
        0.3000000000000000000108420217248550443400745280086994171142578125
        $ perl -E 'say sprintf "%.64f", 0.1 + 0.2 - 0.3'
        0.0000000000000000000000000000000000000000000000000000000000000000
    

My Perl is configured with -Dusemorebits.
[https://metacpan.org/pod/distribution/perl/INSTALL#more-
bits](https://metacpan.org/pod/distribution/perl/INSTALL#more-bits)

------
headmelted
So I have a question.

For those that arrive at the right answer, is it by mathematical correctness
in the implementation or by rounding down?

i.e. Which are right and which are flukey in their wrongness?

Under any other circumstance I wouldn't care about this pointless detail, but
seeing as how this is the top post on the front page, I'd like an
excruciatingly detailed investigation please.

Don't make me not get round to doing it myself, sir.

~~~
Joeri
In PHP's case, it converts the value to a printable (string) representation,
and in so doing rounds off to a precision of 14 digits.

[http://lxr.php.net/xref/PHP_5_4/Zend/zend_operators.c#2142](http://lxr.php.net/xref/PHP_5_4/Zend/zend_operators.c#2142)

The "mathematically" correct answer is 0.30000000000000004 according to the
IEEE-754 spec.

~~~
lsaferite
If you want PHP to print it like the other languages you have to set the
output precision high enough. The default, as you mentioned, is 14 but if you
set it to 17 then you get the 'correct' output.

[https://3v4l.org/TRcJB](https://3v4l.org/TRcJB)

[http://php.net/manual/en/ini.core.php#ini.precision](http://php.net/manual/en/ini.core.php#ini.precision)

~~~
McGlockenshire
Another option is to use printf, which, like in other languages, can take a
precision argument.

    
    
        printf('%.17f', 0.1 + 0.2); // 0.30000000000000004

------
jfindley
Go is actually a lot more confusing about this than this page makes clear.

a := .1; b := .2

fmt.Println(a+b) returns 0.30000000000000004

fmt.Println(.1+.2) returns 0.3

Even worse, this persists into comparisons:

c := .3

a+b == c returns false

.1+.2 == c returns true

It's not a particularly big issue, mostly, but I'm sure it's confused someone
before now, and will do so again.

Demo code:
[http://play.golang.org/p/FOv0JQRQJN](http://play.golang.org/p/FOv0JQRQJN)

~~~
sacado2
Yeah, that's weird. Where's the magic?

~~~
1wd
Go is specified to calculate constant expressions with higher precision (256
bits of mantissa).
[https://golang.org/ref/spec#Constants](https://golang.org/ref/spec#Constants)

------
kozak
Let's start another round of DEC64 discussion? ;-)

[http://www.dec64.com/](http://www.dec64.com/)

------
Walkman
Here is a very nice website that also gives solutions, not just a description
of the problem: [http://floating-point-gui.de/](http://floating-point-gui.de/)

It helped me a lot when I started learning.

------
teh
Python's print method is actually fairly clever, picking the "nicest"
representation that's still correct.

[https://docs.python.org/2/tutorial/floatingpoint.html#tut-
fp...](https://docs.python.org/2/tutorial/floatingpoint.html#tut-fp-issues)

Quote:

"In current versions, Python displays a value based on the shortest decimal
fraction that rounds correctly back to the true binary value, resulting simply
in ‘0.1’."

~~~
mkesper
Additionally, for exact calculations you might want to use the decimal module.
[https://docs.python.org/3.5/library/decimal.html#module-
deci...](https://docs.python.org/3.5/library/decimal.html#module-decimal)

~~~
fulafel
Try the usual decimally-unrepresentable numbers (like 1/3).

~~~
mkesper
As illumen pointed out, Python has also got fractions. For things like Pi
however, there is no exact numerical representation and you'd probably want
sympy or something similar.

~~~
fulafel
Sure. But the inability to represent 1/3 in decimal is just the same as
inability to represent 1/10 in binary. The binary case just seems different
because we are using a foreign base (decimal) literal "0.1" to express the
binary number.

------
jamesdutc
I also posted this below as a reply.

Python uses David Gay's algorithm for printing the shortest equivalent
floating point representation.

Implementation details and discussion are available from the Python bug
tracker:
[https://bugs.python.org/issue1580](https://bugs.python.org/issue1580)

There are a few broken links. I believe the original poster references the
following paper by Robert Burger & Kent Dybvig, "Printing Floating-Point
Numbers Quickly and Accurately":

[http://www.cs.indiana.edu/~dyb/pubs/FP-Printing-
PLDI96.pdf](http://www.cs.indiana.edu/~dyb/pubs/FP-Printing-PLDI96.pdf)

There's also a reference to the following paper by William Clinger, "How to
Read Floating Point Numbers Accurately":

[http://www.cesura17.net/~will/professional/research/papers/r...](http://www.cesura17.net/~will/professional/research/papers/retrospective.pdf)

ftp://ftp.ccs.neu.edu/pub/people/will/retrospective.pdf

I believe this is the David Gay paper, "Correctly Rounded Binary-Decimal and
Decimal-Binary Conversions":

[http://www.ampl.com/REFS/rounding.pdf](http://www.ampl.com/REFS/rounding.pdf)

By the way, this has previously been discussed on HN (which is where I think I
first read all of this...):

[https://news.ycombinator.com/item?id=2708983](https://news.ycombinator.com/item?id=2708983)

~~~
3JPLW
There's also the GRISU algorithm:
[http://dl.acm.org/citation.cfm?id=1806623](http://dl.acm.org/citation.cfm?id=1806623)

------
jdaley
What are the differences in implementation of the languages that cause some to
show a rounding error and others not?

~~~
morsch
I suspect in many instances it's down to the method doing the printing being
"helpful". The page documents this behaviour with Python. I've never
specifically used C#, but it seems to default to the "G" format[1]; the
alternative "R" format shows the expected result. You can play around with it
here: [http://ideone.com/hxD96J](http://ideone.com/hxD96J)

Printing out is kind of a misleading check here; maybe a better idea would be
to test equality with the constant 0.3. Of course the site is just a neat
illustration and not a technical whitepaper.

[1] [https://msdn.microsoft.com/en-
us/library/dwhawy9k(v=vs.110)....](https://msdn.microsoft.com/en-
us/library/dwhawy9k\(v=vs.110\).aspx) and [https://msdn.microsoft.com/en-
us/library/3hfd35ad(v=vs.110)....](https://msdn.microsoft.com/en-
us/library/3hfd35ad\(v=vs.110\).aspx)

~~~
yellowstuff
Yes, I agree this page is misleading. It makes it look like C# is using
rational numbers instead of floating point. In fact the "G" format defaults to
15 digits of precision for doubles, which is few enough to avoid ugly strings
in many simple situations, but not all:

        Console.WriteLine((.1 + .2) - .3);
        // Result: 5.55111512312578E-17

------
speeder
There is one language I once complained to the authors was broken: Lua.

Lua uses floating point by default and can't do integer math at all. I made a
game using Lua; back then DirectX had a bug where it would switch the floating
point mode of the processor without permission, and sometimes the result was
some extreme FPU imprecision.

Since Lua uses only the FPU for maths, even for integers, the result was that
my game had lots of extremely bizarre results sometimes on Windows (while on
Linux and OSX it worked as expected). It happened only once, but I saw 5+6
result in 13...

It was a really hard thing to debug, especially because of the flaming (when I
asked about this on Lua IRC and forums, people flamed me endlessly; happily,
one guy in particular suggested I check if it was the DX issue, and indeed it
was).

~~~
mortehu
> Lua uses floating point by default, it can't make integer math at all

JavaScript does this too. The fact is that with 64 bit floating point numbers,
you can exactly represent every integer from -9007199254740992 to
9007199254740992, which is larger than the range of `int` on most systems
anyway. 5+6 resulting in 13 is not an artifact of using floating point
numbers. CPUs or GPUs don't randomly lose precision like that.

32 bit floating point numbers can exactly represent every integer from
-16777216 to 16777216.

See also [http://www.lua.org/pil/2.3.html](http://www.lua.org/pil/2.3.html)
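
A sketch of that exact-integer range in Python, whose float is the same
64-bit double:

        print(2.0**53)                 # 9007199254740992.0
        print(2.0**53 == 2.0**53 + 1)  # True: 2^53 + 1 is not representable
        print(5.0 + 6.0 == 11.0)       # True: small integer arithmetic is exact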

~~~
speeder
By the way, your reply was one of the most common replies to me that quickly
turned into flaming (people offending me in every way they could to tell me
Lua was perfect and I was seeing things, because 32-bit is enough for the
integers I was using...)

------
teddyh
This table does not show which languages have support for Decimal numbers,
which perform this calculation exactly.

~~~
protonfish
Can it calculate 1/3 exactly? You can in trinary: It's 0.1.

Binary, decimal, trinary, or any other numbering system is no more accurate or
"exact" than any other. They all have fractions that they cannot represent.

~~~
recursive
For that you'd need a Rational.

------
lovboat
An explanation of the following useful fact: a fraction p/q equals a
finite-length floating point number in base b <=> q divides b^n for some n
<=> the prime factors of q are prime factors of b.

Proof: if q divides b^n, then q * c = b^n => p/q = (p * c)/(q * c) = (p *
c)/b^n = (p * c)e-n in base b.

Example: Express the fraction 7/18 (base 10), in base 60 and convert it to a
floating point in base 60.

60 = 2^2 * 3^1 * 5^1, 18 = 2^1 * 3^2; look for c such that 18 * c is a power
of 60. Observe that the factors of 60^2 all have exponents greater than or
equal to those of 18, so we can multiply 18 by the required factors to get
60^2. This way: 60^2 = (2^2 * 3 * 5)^2 = 2^4 * 3^2 * 5^2 = 18 * c = (2^1 *
3^2) * (2^3 * 3^0 * 5^2) = 18 * 200, so we need to multiply the numerator
and denominator of the initial fraction by 200 to get a new fraction whose
denominator is a power of 60. Now 7/18 = (7 * 200)/(18 * 200) = 1400/(60^2)
and 1400 = 23 * 60 + 20.

Finally 7/18 = 1400/(60^2) = 0.2320 in base 60.

We use the notation 00, 01, 02, ... 59 for the digits of base 60, so 2320 is
a two-digit number in base 60 (digits 23 and 20).

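The criterion is also easy to check mechanically; a sketch in Python (the
helper name and structure are my own):

        from math import gcd

        def is_finite_in_base(p, q, b):
            """True iff p/q has a finite digit expansion in base b, i.e. every
            prime factor of q (in lowest terms) also divides b."""
            q //= gcd(p, q)      # reduce the fraction to lowest terms
            g = gcd(q, b)
            while g > 1:         # strip out the prime factors q shares with b
                while q % g == 0:
                    q //= g
                g = gcd(q, b)
            return q == 1

        print(is_finite_in_base(7, 18, 10))  # False: 18 has the prime factor 3
        print(is_finite_in_base(7, 18, 60))  # True: 60 = 2^2 * 3 * 5 covers 2 and 3
        print(is_finite_in_base(1, 10, 2))   # False: why 0.1 is inexact in binary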

------
Kristine1975
The author forgot Microsoft Excel. There's even a Wikipedia page about it:
[https://en.wikipedia.org/wiki/Numeric_precision_in_Microsoft...](https://en.wikipedia.org/wiki/Numeric_precision_in_Microsoft_Excel)

Not that other spreadsheets (LibreOffice, Apple's Numbers) are better: They
also use binary floating-point data types instead of a decimal one.

------
whitej125
I remember, once upon a time... a bug where we were trying to find the average
credit rating (AAA, AA+, AA, etc) of a portfolio of companies. And we had this
issue where, in a simple degenerate case, the average of AA+ and AA+ became AA
(AA+ is expected). In the end, it was an issue of floating point error
accounting! (This was in C++ btw on a probably not-so-familiar platform).

TL;DR: floor(13.0) != 13; floor(13.0) == 12.

Each credit rating was mapped to an integer. We'd take the floating point
average of all the integers and then floor it. The integral answer would be
used to map back to a credit rating.

Let's say AA+ was mapped to the number 13. Well floor( (float(13) + float(13))
/ 2.0) == 12. Not 13.

That's because floating point 13.0 is really 12.99999999999.

The fix was to add accounting for floating point error by adding in ULP
([https://en.wikipedia.org/wiki/Unit_in_the_last_place](https://en.wikipedia.org/wiki/Unit_in_the_last_place))
which in essence tips the representation of 13.0 to 13.00000000001 (or
whatever the real number is).

~~~
nwhitehead
This might be true on really weird platforms, but with IEEE-754 floating point
this isn't true. Integers within a big range are exactly representable without
any rounding error. Doing floor((float(13) + float(13))/2.0) should be exactly
13. It is entirely possible that the weird platform was supposed to be using
IEEE-754 arithmetic but had a buggy math library (depressingly common).
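
A minimal sketch of the disputed claim on an IEEE-754 platform (Python's
float is a C double):

        import math

        x = (13.0 + 13.0) / 2.0
        print(x)              # 13.0 -- exact; there is no hidden 12.999...
        print(math.floor(x))  # 13
        print(x == 13.0)      # True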

------
Angostura
Interesting that Swift and Objective-C give different results.

~~~
such_a_casual
They do not. Open up a swift repl and try it.

------
_Codemonkeyism
FWIW

        scala> BigDecimal("0.1") + BigDecimal("0.2")
        res0: scala.math.BigDecimal = 0.3

Or as I wrote in 2008 [http://codemonkeyism.com/once-and-for-all-do-not-use-
double-...](http://codemonkeyism.com/once-and-for-all-do-not-use-double-for-
money/)

------
ikeboy
Relevant smbc and xkcd:

[http://www.smbc-comics.com/?id=2999](http://www.smbc-comics.com/?id=2999)
[https://www.xkcd.com/217/](https://www.xkcd.com/217/)

------
ceronman
Recommended watch: "Demystifying Floating Point" by John Farrier at CppCon
2015.

[https://www.youtube.com/watch?v=k12BJGSc2Nc](https://www.youtube.com/watch?v=k12BJGSc2Nc)

------
xedarius
Why don't all these languages fold down to the same FPU add and hence produce
the same output (on the same hardware)?

Are you telling me they implement their own version of the IEEE float
standard? Surely that way madness lies.

~~~
dpkendal
The difference is in how the result is converted to a string, not in how the
result is produced.

~~~
xedarius
gotcha

------
NamPNQ
Rust: [https://play.rust-
lang.org/?gist=c02ab67dab3e180b5cf9&versio...](https://play.rust-
lang.org/?gist=c02ab67dab3e180b5cf9&version=stable)

~~~
curun1r
It's not quite that simple: [https://play.rust-
lang.org/?gist=7f6ba6bcd1c50868928a&versio...](https://play.rust-
lang.org/?gist=7f6ba6bcd1c50868928a&version=stable)

------
m_coder
I know that no one here cares, but VBA returns 0.3.

Edit: however, a better test of ?.1+.2-.3 in the immediate window now returns
2.77555756156289E-17. I wondered about that first result...

------
insertion
I would like to see a language use decimal by default. Under the hood, it
could use floating point for certain types of calculations where less
precision is required. If I want to know the answer to 0.2 + 0.1, chances are
I want the answer to be 0.3. If I'm writing performance-sensitive code, maybe
I can drop down a level and opt for float(0.2) + float(0.1). Are any languages
doing this currently?

~~~
dpkendal
Perl 6 uses rationals by default. They have the advantages of being base-
agnostic, able to accurately represent any recurring digit expansion
regardless of eventual base, and also faster (since, especially if you
normalize (convert to lowest terms) lazily, most operations are just a few
integer instructions with no branching, looping, or bit-twiddling involved).

~~~
grondilu
True. Also notice that if the user wants a floating point, the literal format
is the one with the e<exponent> suffix:

    
    
        0.1e0, 1e-1, 0.2e0, 2e-1 etc.

------
forrestthewoods
Floating point math is fun. Here's a post I did a while back that deals with
similar-ish shenanigans. [https://medium.com/forrest-the-woods/perfect-
prevention-of-i...](https://medium.com/forrest-the-woods/perfect-prevention-
of-int-overflows-f000f7e893ee)

------
GnwbZHiU
Anyone understand why it is 0.30000000000000004, and not 0.30000000000000002
or 0.30000000000000003 or something else?

~~~
dbaupp
Floating point is (usually) about "closest representable value". That is, the
normal floats form a finite subset of the real numbers, and a calculation like
x + y will pick the closest element to the true result.

In this case, adding the closest floating point value to "0.1"
(0x1999999999999a / 2^56) and the closest one to "0.2" (0x1999999999999a /
2^55) gives

    
    
      0x13333333333334 / 2^54
    

which is exactly

    
    
      0.3000000000000000444089209850062616169452667236328125
    

(Rational numbers with a denominator that's a power of two have a finite
decimal representation, and that's the full one for this number.)

However, when printing, one is either interested in a close-enough rounded
value (e.g. like some languages print 0.3 by default, or how %.3f will
explicitly limit to 3 digits), or an exact representation. The exact
representation should be round-trippable, so when you read it back in, you get
exactly the same value. In practice this means printing a float z as a decimal
value such that the closest float to the exact decimal value (i.e. as a real
number) is z. Obviously the full string above works because it is exact, but
it is nice to be as short as possible. In this case, the floats immediately
before and after our number are exactly

    
    
      0.299999999999999988897769753748434595763683319091796875
      0.300000000000000099920072216264088638126850128173828125
    

So, only printing up to the first 4 is enough to uniquely identify our float:
it is the closest one.

The 0.2999* float is slightly interesting: it is the closest float to 0.3, and
most environments will print "0.3" in exact mode.
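
All of the values above can be reproduced in Python (a sketch; math.nextafter
assumes Python 3.9+):

        import math
        from decimal import Decimal

        x = 0.1 + 0.2
        print(Decimal(x))  # 0.3000000000000000444089209850062616169452667236328125

        # The immediately neighbouring representable doubles:
        print(Decimal(math.nextafter(x, -math.inf)))  # 0.29999999999999998889...
        print(Decimal(math.nextafter(x, math.inf)))   # 0.30000000000000009992...

        print(repr(x))     # '0.30000000000000004', the shortest round-trippable form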

There's a lot of trickery/research around printing[1] and operating on floats:
rounding things correctly can be highly non-trivial[2].

[1]: e.g. [http://www.cs.indiana.edu/~dyb/pubs/FP-Printing-
PLDI96.pdf](http://www.cs.indiana.edu/~dyb/pubs/FP-Printing-PLDI96.pdf)

[2]: [https://en.wikipedia.org/wiki/Rounding#Table-
maker.27s_dilem...](https://en.wikipedia.org/wiki/Rounding#Table-
maker.27s_dilemma)

------
jrapdx3
FWIW, tried this with two less common languages:

    
    
        Tcl: + 0.1 0.2 
          0.30000000000000004
    
        Chicken Scheme: (+ 0.1 0.2)
          0.3
    

Same results under Windows 8.1 and FreeBSD 10.2. I suppose one is as "right"
as the other; at least both answers are equally popular among languages.

~~~
dTal
Also Chicken Scheme:

    
    
      (- 0.3 (+ 0.1 0.2))
        -5.55111512312578e-17
    

Of course you can always (use numbers), but then you have to use (/ 1 10) and
(/ 2 10). "0.3" is still considered "inexact" because of the decimal point. I
don't like this - finite decimals are perfectly rational numbers and trivially
convertible, so why can't the interpreter treat them as such?

------
panic
This is more a comparison of output routines than anything else. In PHP, one
of the languages that gives the "correct" result:

    
    
        php > echo .1 + .2;
        0.3
        php > if (.1 + .2 === .3) { echo "equal"; } else { echo "not equal"; }
        not equal

~~~
nolok
Errr, that's exactly what the page implies; it does not say nor imply "those
that display .3 are doing it correctly internally".

One of the cardinal rules of floating point in computing is that you never do
direct comparison between values, whatever the language, so your test should
never ever be used. That doesn't mean a language cannot be allowed to infer
what the correct display of a floating point value should be.

------
felipesabino
Recommended:

Floating Point Numbers - Computerphile
[https://www.youtube.com/watch?v=PZRI1IfStY0](https://www.youtube.com/watch?v=PZRI1IfStY0)

Tom Scott's fun and simple take on floating point is a good video explaining
why such weird errors happen.

------
Roboprog
Along the same lines, but slightly different emphasis (fixed precision for
accounting):
[http://roboprogs.com/devel/2010.02.html](http://roboprogs.com/devel/2010.02.html)

------
dzhiurgis
Salesforce Apex:

    
    
      system.debug( 0.1 + 0.2 );
      09:25:37:027 USER_DEBUG [3]|DEBUG|0.3
    

However

    
    
      double a = 0.1;
      double b = 0.2;
      system.debug( a + b );
      09:26:57:043 USER_DEBUG [4]|DEBUG|0.30000000000000004

------
Bouncingsoul1
For C# I see a different behaviour than in his example:

        double test = 0.1 + 0.2;       // 0.30000000000000004 if you check in the debugger
        Debug.Print(test.ToString());  // will print 0.3

------
dTal
Discussion of this issue from 70 days ago here:

[https://news.ycombinator.com/item?id=10168799](https://news.ycombinator.com/item?id=10168799)

------
junke
The Common Lisp version is slightly misleading because computations are made
by default with single-float types, whereas there is also short-float, double-
float and long-float.

~~~
nabla9
The default for _READ-DEFAULT-FLOAT-FORMAT_ is implementation-dependent and
can be changed by the user.

In CL type float has subtypes single-float, double-float, short-float, and
long-float. "Any two of them must be either disjoint types or the same type;
if the same type, then any other types between them in the above ordering must
also be the same type. For example, if the type single-float and the type
long-float are the same type, then the type double-float must be the same type
also."

If there is only one float representation it has to be `(typep x 'single-
float)` but if the internal representation is different (double-float for
example) it can be all of the types. At least this is how I understand the
spec.

[http://clhs.lisp.se/Body/v_rd_def.htm#STread-default-
float-f...](http://clhs.lisp.se/Body/v_rd_def.htm#STread-default-float-
formatST)

[http://clhs.lisp.se/Body/t_float.htm#float](http://clhs.lisp.se/Body/t_float.htm#float)

[http://clhs.lisp.se/Body/t_short_.htm](http://clhs.lisp.se/Body/t_short_.htm)

~~~
junke
Indeed. Note also that there are recommended minimum precisions and exponent
sizes, which are followed in practice by CL implementations. That means that
if double and long are the same type, the precision and size is at least the
one recommended for long-float.

------
soegaard
Anyone care to explain the details on a Texas Instruments NSpire calculator
(or CAS)?

    
    
        a:=0.1+0.2
        b:=0.3
        a-b     |>   0.
    

Using the non-exact mode.

~~~
acqq
Calculators have traditionally used base 10 internally, not base 2. What is
the base of the numbers on that TI? That's the answer: if it's base 10, .1,
.2 and .3 are all exact.

~~~
acqq
Wikipedia mentions base 10 for TI-BASIC:

[https://en.wikipedia.org/wiki/TI-BASIC](https://en.wikipedia.org/wiki/TI-
BASIC)

"Real numbers, using _decimal floating point._ These store up to 14
significant digits depending on the calculator model."

~~~
soegaard
This is it.

An example: (1/3)*3.-1 gives -1E-14

------
acidflask
Obligatory SMBC reference

[http://www.smbc-comics.com/?id=2999](http://www.smbc-comics.com/?id=2999)

------
pbreit
So, best way to store currency amounts? Cents or decimal? Would the answer
change based on platform or DB (i.e., Python, Postgres)?

~~~
AlterEgo20
Any SQL DBMS is able to store and handle the NUMERIC data type. It is a
binary-coded decimal with any precision you may want.

Some of the DBMS's also have dedicated currency types.

------
noiv
Math tells me there is an infinite number of real numbers between 0 and 1. 64
bits are really not enough to address that numberspace...

------
amq
You learn this in the first lectures of a CS degree.

------
nikolay
A bit unfair, as Java has BigDecimal.

------
mpu
Bc and dc do exact arithmetic.

------
eccstartup
What a good domain name!

~~~
erikwiffin
That's 90% of the reason this site exists.

------
X000
AweSome Domain Name ... :-D :v :v :v !¡!¡

------
such_a_casual
edit: The OP has accepted my pull request and this explanation is now directly
available on the website.

It's a shame that the web page doesn't explain why floating point numbers are
inaccurate when it comes to decimals. It's actually pretty simple. A base 10
system (like ours) can only cleanly express fractions whose denominators use
the prime factors of the base. The prime factors of 10 are 2 and 5. So 1/2,
1/4, 1/5, 1/8, and 1/10 can all be expressed cleanly because their
denominators use only prime factors of 10. In contrast, 1/3, 1/6, and 1/7 are
all repeating decimals because their denominators use the prime factor 3 or
7, neither of which divides 10.

In binary (or base 2), the only prime factor is 2. So you can only cleanly
express fractions whose denominators contain only 2 as a prime factor. In
binary, 1/2, 1/4, and 1/8 would all be expressed cleanly, while 1/5 or 1/10
would be repeating.

So 0.1 and 0.2 (1/10 and 1/5) while clean decimals in a base 10 system, are
repeating decimals in the base 2 system the computer is operating in. When you
do math on these repeating decimals, you end up with leftovers which carry
over when you convert the computer's base 2 (binary) number into a more human
readable base 10 number.
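
A sketch of that long division in Python, using exact rationals to expose the
repeating binary digits of 1/10:

        from fractions import Fraction

        x = Fraction(1, 10)
        digits = []
        for _ in range(20):
            x *= 2
            digits.append(int(x))  # next binary digit (0 or 1)
            x -= int(x)
        print("0." + "".join(map(str, digits)))  # 0.00011001100110011001 (0011 repeats)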

~~~
StefanKarpinski
Spot on. I would add to this explanation that the real confusion comes from
the fact that in most languages when you write `0.1`, it doesn't mean 0.1 =
1/10 – instead it means the closest number to 0.1 of the form n/2^k where n <
2^53. Namely 0.1000000000000000055511151231257827021181583404541015625, which
we can represent in decimal because 2 divides 10. And, of course, the illusion
that `0.1` is actually 0.1 is supported by the great efforts [1] made to print
floating-point values with the least number of digits necessary to reproduce
them, so `0.1` still _looks_ like 0.1 when you print it even though it isn't.

[1] [http://www.cs.tufts.edu/~nr/cs257/archive/florian-
loitsch/pr...](http://www.cs.tufts.edu/~nr/cs257/archive/florian-
loitsch/printf.pdf)
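
A sketch showing that form directly, via Python's exact float-to-Fraction
conversion:

        from fractions import Fraction

        f = Fraction(0.1)              # the exact value of the double `0.1`
        print(f)                       # 3602879701896397/36028797018963968
        print(f.denominator == 2**55)  # True: it really is n / 2^k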

~~~
fryguy
In decimal, if you only have 5 digits of mantissa and you write 2/3, it
becomes 0.66667, which is a little bit more than 2/3. Similarly, if you write
the binary number 0.0000'0001 in decimal with 5 digits of mantissa, it
becomes 0.0039063 instead of 0.00390625. If in some faraway land they used
binary math, they would likely ask why such a simple binary number got
rounded off.

------
coderdude
Naturally it's the langs you don't care about that have nailed it. Import
decimal to not fail. Thanks Py. Won't forget your shortcomings ever

------
irascible
They're close enough for government work.

------
icemelt8
Solid proof that C# is better than JAVA. :D

