
0.30000000000000004 - nixy
http://www.google.com/search?q=0.30000000000000004
======
heresy
I have to question the value of CS educations when a post of this nature pops
up every couple of weeks, as if it were news that floating-point arithmetic,
as implemented today, is by its nature an approximation, and that this is more
noticeable the fewer bits you have to play with when working with values that
can't be represented neatly.

Financial arithmetic? Convert to the smallest unit and use integers, or the
currency data type du jour in your language, and don't act surprised when
operations on 32-bit floating point don't yield the intuitively correct
values.
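
A minimal sketch of that approach in Python (the amounts here are made up for
illustration): carry money as integer cents, and only format as decimal at
display time.

```python
# Represent money as integer cents so addition and subtraction are exact.
price_cents = 10      # $0.10
tax_cents = 20        # $0.20
total_cents = price_cents + tax_cents

print(total_cents)                  # 30 -- exact, no 0.30000000000000004
print(f"${total_cents / 100:.2f}")  # convert to decimal only when displaying
```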

If you understood the representation format, you'd understand why.

~~~
Cushman
I don't think it's just that it's an approximation. We're used to doing math
with approximations (0.33333...). What's unintuitive about it is that numbers
with very simple finite expressions in decimal (0.3) can only be expressed by
repeating binary floating points.

Even competent coders who usually work in higher-level languages can sometimes
forget that the decimal representations they work with are actually
approximations of binary numbers, which is responsible for other seemingly
weird behavior (why does ~2 = -3?), and I doubt even the most skilled computer
scientists are used to converting decimals to arbitrary-precision binary in
their heads.
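
Both points are easy to check from a Python prompt; `float.hex` even shows the
repeating binary pattern directly:

```python
# Bitwise NOT on two's-complement integers: ~x == -x - 1, hence ~2 == -3.
print(~2)           # -3

# 0.1 has no finite base-2 expansion; the stored double is a truncated
# repeating pattern, visible in the hex digits of the significand.
print((0.1).hex())  # 0x1.999999999999ap-4
```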

It _is_ unintuitive, and it's not like it's hard to explain why. "You'd
understand if you were smarter" is a cop out.

~~~
cemerick
Things get a lot easier when you're working with tools that help you out, even
just a little.

Clojure here, FWIW:

(+ 0.1M 0.2M) => 0.3M

That 'M' suffix denotes a BigDecimal, which provides for arbitrary-precision
decimal math (which Clojure's arithmetic ops dispatch to as necessary).

A similar 'N' suffix is coming in the next release that denotes (contagious)
BigInteger math (though as fast as longs when values are < 2^31), so one
doesn't have to worry about overflow issues (which rate up there with
misunderstandings of internal floating point representations in terms of
frequency).

Other reader syntax is provided for other common notations (e.g. 10e6, 16rFF,
0xFF, 0220, 2r100010101).

~~~
heresy
This seems like the right way to do it.

Though, I assume non-suffixed literals are still regular floats?

~~~
masklinn
> This seems like the right way to do it.

It's also slow as molasses, which is why very few languages default to decimal
floats; most use IEEE binary floats (or doubles) instead, via their hardware
implementation. The behavior of IEEE floats and doubles is very well defined,
and though they are unfit for some sectors (you do not want to count money
with them), have to be massaged a bit when displayed, and don't deal well with
great differences in magnitude (e.g. 1e21 + 1 == 1e21), they work well enough
in practice. And they're implemented in hardware.
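
The magnitude problem is easy to reproduce (Python shown here, same behavior
in any language using IEEE doubles):

```python
# At 1e21 the gap between adjacent doubles is far larger than 1,
# so adding 1 is absorbed entirely:
print(1e21 + 1 == 1e21)   # True

# At 1e10 the gap is still well below 1, so the addition survives:
print(1e10 + 1 == 1e10)   # False
```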

~~~
heresy
I'm aware of the performance hit, I just like the approach of a BigDecimal
literal instead of having to write that boilerplate.

~~~
masklinn
> I just like the approach of a BigDecimal literal instead of having to write
> that boilerplate.

Ah yes. Well, not all languages require a bunch of boilerplate either. In
Python, for instance, there is a type decimal.Decimal which you can just alias
to `d` and write:

    
    
        from decimal import Decimal as d
    
        val = d('0.1')

~~~
gxti
I do wish there were a syntax for Decimal literals, e.g. 0.1D (analogous to
100L as long, etc.). Maybe I'll make it my weekend project even if I don't
really expect it to be accepted into CPython proper.

~~~
tonyarkles
I'm pretty sure the L syntax has gone away in Python 3, with numbers being
auto-promoted to long if they're going to overflow. I'm a bit hazy about this,
but I think I talked to the maintainer over lunch at PyCon two years ago.

~~~
gxti
There's actually no externally-visible long type at all in Python 3, it's all
done transparently now. That said, I'm not using Python 3. The promoting
behavior you speak of is how Python 2 works.

------
tzs
Apple's Calculator app used to have an odd floating point rounding behavior (I
filed a bug report and they fixed it):

    
    
        14479.14 - 152.36 =    (result is 14326.78)
        1143 / 78 =            (result is 14.6538461538461)
        14479.14 - 152.36 =    (result is 14326.7799999999884)
    

Note that the first and third calculations are the same, yet they resulted in
different displayed results!

I never understood this bug. I understand floating point, so understand that
some numbers are not exactly representable. However, it should at least still
be consistent! The same calculation should give the same results every time.

~~~
hxa7241
I suppose it depends how exactly you define 'same calculation' -- two
possible causes of inconsistency are:

* FP means the rules of algebra do not hold, so if the calculation is done in a different but equivalent form, you can get a different result.

* With Intel's old x87 FPU, if a value is written to memory (truncating the 80-bit registers to 64 bits) rather than staying in registers, you can get a different result.
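
The first point is easy to demonstrate; a small Python example (values chosen
deliberately to force the effect):

```python
# Floating-point addition is not associative: regrouping changes the result.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # a + b cancels exactly, then the 1.0 survives
right = a + (b + c)   # b + c rounds back to -1e16, so the 1.0 is lost

print(left, right)    # 1.0 0.0
```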

------
tmsh
Welcome to the best lectures of your life:

[http://webcast.berkeley.edu/course_details_new.php?seriesid=...](http://webcast.berkeley.edu/course_details_new.php?seriesid=2010-B-26353&semesterid=2010-B)

also available via iTunes U. I'm currently listening to them on my commute.
Note, if you do actually go through the whole course -- you'll need to listen
to a different year for lecture 24 or so -- that one is skipped. One of the
highlights of my day is actually coming home and re-looking up what he's
talking about.

(Oh, and of course you can just listen to the two floating-point lectures. It
has to do with the non-uniform -- or at least non-linearly uniform -- mapping
of numbers, represented with a significand/mantissa and an exponent, onto the
set of real numbers, plus the fact that the exponent used is base 2 in the
hardware, so the floating-point numbers are spread along the number line in a
particular way. The gap between adjacent representable numbers, as you tick up
the odometer with each bit, varies depending on where you are on the number
line: with big numbers it's actually much larger; with smaller numbers it's
pretty minimal, but not unnoticeable, as seen with this example. Does that
make sense? Maybe I'm off about this... Anyway, still obviously recommend the
lectures. And now, I'm going to read up more on ALUs and MUXes..)

------
erikwiffin
<http://0.30000000000000004.com/>

Because why not? I've populated it with some languages that I have convenient
access to an interpreter for. If you post/send me .1 + .2 in any other
languages, I'll try to put them up.

~~~
eru
GHC (Haskell):

    
    
      $ ghci
      GHCi, version 6.12.1: http://www.haskell.org/ghc/  :? for help
      Loading package ghc-prim ... linking ... done.
      Loading package integer-gmp ... linking ... done.
      Loading package base ... linking ... done.
      Prelude> 0.1 + 0.2
      0.30000000000000004
    

And just for fun with GHC's rational numbers:

    
    
      Prelude> :m + Data.Ratio
      Prelude Data.Ratio> (1 % 10) + (2 % 10)
      3 % 10
    

Hugs (Haskell):

    
    
      $ hugs
      Hugs> 0.1 + 0.2
      0.3
    

bc:

    
    
      $ bc
      0.1 + 0.2
      .3
    

Gforth:

    
    
      $ gforth 
      0.1e 0.2e f+ f. 0.3  ok
    

dc:

    
    
      $ dc
      0.1 0.2 + p
      .3

~~~
erikwiffin
Thanks, I've added them. Not only do I not have an interpreter for any of
these languages handy, I've never even used them!

~~~
eru
You're welcome.

bc and dc are probably installed on your Linux or Unix box.

By the way, you should fix "Below are some examples of sending .1 + .2 to
standard output in a variety of common languages." to "[...] in a variety of
languages."

It also seems like the names of bc and dc are non-capitalized.

For the Haskell entry, please just shorten it to "0.1 + 0.2". The "Prelude>"
thing is just a prompt for the REPL. ":m + Data.Ratio" loads the
rational-number module, so please take the entry about Haskell's rational
numbers out, since yours is a page about floating point. (You might want to
replace it with a note that Haskell supports rational numbers -- but so do
lots of languages, in their libraries.)

C would be a good addition. (Plus Fortran, Cobol, Ada and J.)

~~~
erikwiffin
Fixed. I've moved the rational number stuff to a comment, I'm working on the
formatting, but I think what I've got now works.

~~~
eru
Good. By the way, having OR in the middle column and AND in the right column
seems a bit strange.

------
messel
This is the type of thing that makes mathematicians flying tackle computer
scientists.

the CS: "Hey, it's round off error. Get used to it"

the Mathematician: "Fix IT!!"

~~~
caf
It's not "round off error", though. More like:

"It's a base 2 fractional number with no exact decimal expansion with a finite
number of digits. Display it in base 2 fractional form if you don't want to
see an approximation."
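
The "approximation" really is only in the display: in Python, constructing a
Decimal from the float prints the exact value the machine actually holds,
with no rounding.

```python
from decimal import Decimal

# Decimal(float) is exact: it shows the full decimal expansion of the
# base-2 fraction actually stored in the double.
print(Decimal(0.1 + 0.2))
# 0.3000000000000000444089209850062616169452667236328125
```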

~~~
aliguori
Well, it's more like:

Finite binary representations form a countably infinite set, and so do finite
decimal representations, so you could map one to the other 1-1 with a simple
mapping function.

OTOH, the real numbers are an uncountably infinite set. The only way to map
fixed-width binary floating-point numbers onto the reals is to cover only a
strict subset.

That subset happens to be different when mapping binary to floating point and
decimal to floating point when using IEEE 754.
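
The "strict subset" is concrete: in Python, fractions.Fraction recovers the
exact rational that the double nearest 0.1 actually is, and it is not 1/10.

```python
from fractions import Fraction

# Fraction(float) is exact: it shows which dyadic rational 0.1 mapped to.
print(Fraction(0.1))                     # 3602879701896397/36028797018963968
print(Fraction(0.1) == Fraction(1, 10))  # False: 1/10 is not in the subset
```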

~~~
mseebach
The mathematician: That's fine for 22/7. Now, go fix your computer so it knows
how to count to 0.3, kthxbye

~~~
caf
0.3 is a rational too; it's just 3/10.

Anyway, mathematicians don't use _numbers_ at all - debasing the equations by
performing _calculations_ with them is so.. gauche. Leave that to the
physicists, chemists and engineers.

~~~
eru
I saw the occasional number in my studies.

We'd rather calculate with knots than numbers. (See
<http://en.wikipedia.org/wiki/Knot_theory>)

------
mseebach
It gets funnier: [http://ma.rtinseeba.ch/post/1344678900/double-considered-
har...](http://ma.rtinseeba.ch/post/1344678900/double-considered-harmful)

------
hartror
<http://stackoverflow.com/search?q=0.30000000000000004>

------
jsvaughan
From Slashdot:
[http://developers.slashdot.org/story/10/05/02/1427214/What-E...](http://developers.slashdot.org/story/10/05/02/1427214/What-
Every-Programmer-Should-Know-About-Floating-Point-
Arithmetic?from=rss&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+Slashdot/slashdot+\(Slashdot\))

Every programmer forum gets a steady stream of novice questions about numbers
not 'adding up.'...

------
sfphotoarts
Leaving egos aside: while the reasons for this are obvious to anyone who
learned programming with binary and old-school stuff like that, there is a
whole genre (class) of programmer that has learned what is needed to build web
sites, and unless you have large-scale issues (which you solve by hiring
someone with old-school CS skills) you can happily be competent and successful
without ever knowing how languages implement floating-point math. I think it
is wrong to belittle these people, because I have worked with them, and
sometimes, I have found, their skills are more influential in the success of a
product than those of the CS guy in the back room tweaking the slab allocator.
Times have changed. It's no longer crucial for everyone who deserves the title
'developer' to know about these kinds of language nuances.

------
voodootikigod
Brendan Eich (creator of JS) discussed this here:
<http://www.aminutewithbrendan.com/pages/20101025> as it pertains to JS, but
it is similar for other languages that implement IEEE double-precision
numbers.

~~~
DougBTX
I think ECMAScript uses double-precision 64-bit binary IEEE 754 values for
all number storage, or are there browser differences I don't know about which
cause problems?

------
Cushman
People often say "Just use (x) decimal arithmetic system for important stuff
like finances."

Out of curiosity, I'm wondering how much trade you would have to be doing for
floating-point imprecision to cause an actual problem.

Taking 0.2+0.1 as an example and figuring an imprecision of
$0.00000000000000004 per $0.30, figuring a loss of one cent as being
significant, I have 0.01/(0.00000000000000004 / 0.3) = 7.5e13, or... seventy-
five trillion dollars?

Never mind that you're as likely to get 0.6+0.1 = 0.69999999999999996, which
should roughly cancel out the error over time.

This is basically just an aesthetic problem in finance, yes?
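
Whether the errors really cancel is testable; a quick Python sketch (the
dollar amounts are made up) summing $0.10 a million times shows a small
systematic drift rather than exact cancellation, though it stays far below a
cent:

```python
# Sum $0.10 a million times, once in floats, once in integer cents.
float_total = 0.0
for _ in range(1_000_000):
    float_total += 0.1

cents_total = 10 * 1_000_000    # exact by construction

print(float_total)              # close to, but not exactly, 100000.0
print(float_total == 100000.0)  # False
print(cents_total / 100)        # 100000.0
```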

~~~
flatline
There is a term for this in mathematics and now I can't remember it, but with
some systems, e.g. number crunching with large matrices, a very small change
like this can magnify into huge differences and give you totally wrong
results. It's definitely something you must be aware of and take into account.

~~~
regularfry
Conditional stability?

~~~
eru
The other related term is 'stiffness'.

------
xtacy
If you really want precision and you're dealing only with rational numbers,
it's better to maintain a struct rational { u64 numerator, denominator; }; and
do all calculations with it.
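
For illustration, Python's standard library already ships essentially this
struct, with arbitrary-precision integers instead of u64:

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding anywhere in the computation.
total = Fraction(1, 10) + Fraction(2, 10)

print(total)                     # 3/10
print(total == Fraction(3, 10))  # True
print(float(total))              # lossy only at the final conversion
```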

------
eiji
Funny thing. Google has this page at rank 5 by now (after 42 mins).

~~~
LaPingvino
Hacker News has intrinsic value

------
protomyth
I remember the discussion of floating point and the talk about BCD (we had
assembler on an IBM/370), but I get the feeling BCDs are not talked about
much anymore, given some of the discussions I've had over the years. A related
thing to watch out for is any arithmetic with units and how to deal with
fractional conversions. This could cost you a lot of money or mess up an
inventory if handled poorly.

------
tung
People who see this fall into one of two kinds: those who are shocked by this,
and those who are shocked by those who are shocked by this.

~~~
maukdaddy
Actually, there's a third category: those of us who understand why it happens
and also understand why others don't.

------
SingAlong
I just tried this in javascript (via Firebug) out of curiosity and it's the
same problem.

0.1 + 0.2 = 0.30000000000000004

~~~
masklinn
> and it's the same problem.

it's not a problem.

~~~
ciupicri
It's not a bug, it's a feature -- or better said, it's a known issue which
should be taken into account when writing a program that uses floating-point
numbers.

~~~
aidenn0
I consider it a bug in the specification that all numbers are floating-point
in javascript.

------
sdizdar
<http://docs.sun.com/source/806-3568/ncg_goldberg.html>

However, my problem is that modern languages hide these things from you.

------
est
Is this IEEE-754 specific?

If so, are there any IEEE-754 alternatives?

~~~
TorKlingberg
Yes, it is IEEE-754 specific, though I think any binary floating point will
have similar pitfalls.

Python has a Decimal type that can represent decimal values exactly. It must
be imported first though. It is probably much slower than IEEE-754 floating
point, but for many uses that is not an issue.
<http://docs.python.org/library/decimal.html>

------
DannoHung
If only IEEE 754-2008 would gain traction...

------
marcusfrex
    
    
        irb(main):002:0> 0.1 + 0.2
        => 0.30000000000000004

------
gnuvince
I want a QPU unit!

