
0.30000000000000004 - beznet
https://0.30000000000000004.com/
======
mcv
The big issue here is what you're going to use your numbers for. If you're
going to do a lot of fast floating point operations for something like
graphics or neural networks, these errors are fine. Speed is more important
than exact accuracy.

If you're handling money, or numbers representing some other real, important
concern where accuracy matters (most likely any number you intend to show to
the user as a number), floats are not what you need.

Back when I started using Groovy, I was very pleased to discover that Groovy's
default decimal number literal was translated to a BigDecimal rather than a
float. For any sort of website, 9 times out of 10, that's what you need.

I'd really appreciate it if JavaScript had a native decimal number type like
that.

~~~
umanwizard
Decimal numbers are not conceptually any more or less exact than binary
numbers. For example, you can't represent 1/3 exactly in decimal, just like
you can't represent 1/5 exactly in binary.

When handling money, we care about _faithfully reproducing the human-centric
quirks of decimal numbers_, not "being more accurate". There's no reason in
principle to regard a system that can't represent 1/3 as being fundamentally
more accurate because it happens to be able to represent 1/5.
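
To see both failures side by side, here is a quick sketch in Python (the
stdlib `decimal` and `fractions` modules; purely illustrative):

      from decimal import Decimal, getcontext
      from fractions import Fraction

      getcontext().prec = 20
      print(Decimal(1) / Decimal(3))  # 0.33333333333333333333 -- 1/3 is inexact in base 10
      print(Decimal("0.2"))           # 0.2 -- 1/5 is exact in base 10
      print(Fraction(0.2))            # 3602879701896397/18014398509481984, the binary float behind "0.2"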

~~~
NohatCoder
Money is really best dealt with as integers: any time you'd use a non-integer
number, use some fixed multiple that makes it an integer, then divide by the
excess factor at the end of the calculation. For instance, computing 2.15%
yearly interest on a bank account might be done as follows (sketched here in
Python, with placeholder data):

    
    
      DAYS_IN_YEAR = 366                         # leap year
      INTEREST_RATE = 215                        # 2.15%, scaled by 10_000

      daily_balances = [100_000] * DAYS_IN_YEAR  # cents; placeholder data
      balance = daily_balances[-1]

      day_balance_sum = 0
      for day_balance in daily_balances:
          day_balance_sum += day_balance

      interest_raw = day_balance_sum * INTEREST_RATE
      # Adding half the divisor turns the floor division below
      # into correctly rounded division.
      interest_raw += DAYS_IN_YEAR * 5000

      interest = interest_raw // (DAYS_IN_YEAR * 10000)
      balance += interest
    

Balance should always be expressed in the smallest fraction of currency that
we conventionally round to, like 1 yen or 1/100 dollar. Adding in half of the
divisor before dividing effectively turns floor division into correctly
rounded division.

~~~
vanni
This is called fixed-point arithmetic:

[https://en.wikipedia.org/wiki/Fixed-point_arithmetic](https://en.wikipedia.org/wiki/Fixed-point_arithmetic)

> In computing, a fixed-point number representation is a real data type for a
> number that has a fixed number of digits after (and sometimes also before)
> the radix point.

> A value of a fixed-point data type is essentially an integer that is scaled
> by an implicit specific factor determined by the type.
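
A minimal illustration in Python (the scale factor and names are made up for
the example):

      SCALE = 100                  # implicit factor: two digits after the point
      price = 19_99                # the integer 1999, read as 19.99
      total = 3 * price            # 5997, i.e. 59.97, exact integer arithmetic
      print(total // SCALE, total % SCALE)   # -> 59 97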

~~~
NohatCoder
Yeah, though that notion tends to come with some conceptual shortcomings, like
presuming a power-of-10 radix. In the above code the radix is implicitly
different on leap years; applying such tricks is usually not possible with a
fixed-point library or language construct.

------
dspillett
MS Excel tries to be clever and disguise the most common places this is
noticed.

Give it =0.1+0.2-0.3 and it will see what you are trying to do and return 0.

Give it anything slightly more complicated, such as =(0.1+0.2-0.3), and the
special-casing won't trigger, in this example displaying 5.55112E-17 or
similar.

~~~
piadodjanho
Are you sure it is not showing the exact answer because the cell precision is
set to a single decimal digit?

~~~
zingmars
Yup: [https://i.imgur.com/VuawaE1.png](https://i.imgur.com/VuawaE1.png), on
Excel v1911 (Build 12228.20332).

------
_bxg1
I remember in college when we learned about this and I had the thought, "Why
don't we just store the numerator and denominator?", and threw together a
little C++ class complete with (then novel, to me) operator-overloads, which
implemented the concept. I felt very proud of myself. Then years later I
learned that it's a thing people actually use:
[https://en.wikipedia.org/wiki/Rational_data_type](https://en.wikipedia.org/wiki/Rational_data_type)
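
Python ships one in its standard library, for instance:

      from fractions import Fraction

      a = Fraction(1, 10) + Fraction(2, 10)   # numerator/denominator kept exactly
      print(a)                                # 3/10
      print(a == Fraction(3, 10))             # True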

~~~
loopz
It's actually in use in many places, for things like handling currency and
money, and for when you get funny corner cases involving rounding such numbers
and pooling the change.

Whenever I see someone handling currency in floats, something inside me
withers and dies a small death.

~~~
_bxg1
I haven't worked in fintech but I've read that money is often represented (at
least in storage) as plain integers, since for example US currency only ever
goes to two decimal places. But I guess once you start operating on it you run
into potential truncation unless you use rationals.

~~~
bwilliams18
US currency can go to more than two decimal places...

[http://blogs.reuters.com/ben-walsh/2013/11/18/do-stocks-really-trade-for-fractions-of-a-penny-sort-of/](http://blogs.reuters.com/ben-walsh/2013/11/18/do-stocks-really-trade-for-fractions-of-a-penny-sort-of/)

I guess it's time for someone to write an "Assumptions Programmers make about
money" post.

~~~
nitwit005
No it can't. There are systems that track things worth less than a penny for
later billing, but at the end of the month when they bill someone, they do
some sort of rounding.

~~~
jcranmer
Someone should tell that to everyone who ever used a ½¢ coin in the US. Also,
US law explicitly states (31 USC §5101) that the unit of 1/1000th of a dollar
is a mill.

~~~
coldtea
> _Someone should tell that to everyone who ever used a ½¢ coin in the US._

All 10 of them?

~~~
jsjohnst
I’ve got some at home, but admittedly I’d never use one as currency. I also
have US ½¢ paper notes too.

------
dang
A thread from 2017.00000000000:
[https://news.ycombinator.com/item?id=14018450](https://news.ycombinator.com/item?id=14018450)

2015.000000000000:
[https://news.ycombinator.com/item?id=10558871](https://news.ycombinator.com/item?id=10558871)

~~~
umanwizard
FWIW, both of those can be expressed exactly by floating-point numbers ;)

~~~
IshKebab
Right, all integers up to 2^53 (or something like that) can be (in double
precision).
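
That cutoff is easy to poke at (Python, illustrative):

      print(2.0**53)                            # 9007199254740992.0
      print(float(2**53) == float(2**53 + 1))   # True: 2**53 + 1 rounds back down
      print(float(2**53 - 1) == 2**53 - 1)      # True: still exactly representable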

I assume that's the reason they made the mantissa linear, even though having
the whole thing logarithmic makes more sense.

~~~
nine_k
What. Mantissa is already logarithmic, bit number _n_ has value _2^(n-N-1)_
for an _N_-bit mantissa.

This is how positional number systems work at all.

~~~
IshKebab
The mantissa is linear. It's unrelated to how positional number systems work.
A floating point value is split into two numbers - the exponent and the
mantissa. Normally they are used to represent a final number like:

    
    
        x = 2^e * (1 + m)
    

Where e is the exponent and m is the mantissa (varying _linearly_ from 0 to
1).
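
Python's `math.frexp` exposes this same split, just normalised to 0.5 <= m < 1
rather than the 1 + m form above (illustrative):

      import math

      m, e = math.frexp(6.5)   # 6.5 == m * 2**e, with 0.5 <= m < 1
      print(m, e)              # 0.8125 4 -> prints: 0.8125 3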

But you could have a fully exponential number format:

    
    
        x = 2^(m + o)
    

As pointed out though, it makes addition much more complicated, you can't
exactly represent integers, and someone told me it makes quantisation noise
worse too. Bad idea.

------
mark-r
Also the subject of one of the most popular questions on StackOverflow:
[https://stackoverflow.com/q/588004/5987](https://stackoverflow.com/q/588004/5987)

------
lordnacho
While it's true that floating point has its limitations, this stuff about not
using it for money seems overblown to me. I've worked in finance for many
years, and it really doesn't matter that much. There are de minimis clauses in
contracts that basically say "forget about the fractions of a cent". Of course
it might still trip up your position checking code, but that's easily fixed
with a tiny tolerance.

~~~
dionian
When the fractions actually don't matter... it's so painless to just store
everything in pennies rather than dollars (multiply everything by 100).

~~~
wruza
It’s not painless. E.g. dividing $100.00 by 12 months in integer cents requires
eleven payments of $8.33 and one of $8.37 (or better, 4x(2x8.33+8.34),
depending on the definition of ‘better’). You can forget this $0.04, but it
will jump around in reports until you get rid of it; it requires someone’s
attention anyway, no matter how small it is. On the other hand, in unrounded
floating point that will lead to a mismatch between (integer) payments and
calculations. In rounded fp it’s the same problem, except when you’re trying
very hard for error bits to accumulate (like cross-multiplying dataset sums
with no intermediate rounding, which is nonsense in financial calc and where
regular fixed-point integers will overflow anyway).

What I’m trying to show here is that _both_ integers and floating point are
not suitable for doing ‘simple’ financial math. But we get used to this
Bresenhamming in integers and do not perceive it as solving an error
correction problem.
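
For reference, the usual integer-cents fix is a largest-remainder split,
sketched here in Python (the function name is made up):

      def split_cents(total_cents: int, parts: int) -> list[int]:
          # Floor share for everyone; spread the leftover cents one each.
          base, leftover = divmod(total_cents, parts)
          return [base + (1 if i < leftover else 0) for i in range(parts)]

      shares = split_cents(10_000, 12)   # $100.00 over 12 months
      print(shares)                      # four 834s, then eight 833s
      print(sum(shares))                 # 10000 -- nothing lost or invented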

~~~
Waterluvian
This struck home with me when one day a friend and I bought the same thing and
he paid a penny more.

I realized something I didn't ever notice or appreciate in 20+ years: oh yeah,
they can't just round off the penny in their favour every time. And the code
that tracks when to charge someone an extra penny must be irritating to
develop and manage. All of a sudden you've got state.

~~~
tricolon
What kind of thing and store was it?

------
GuB-42
That's one of the worst domain names ever. When the topic comes up, I always
remember "that single-serving website with a domain name that looks like a
number" and then take a surprisingly long time searching for it.

I have written a test framework and I am quite familiar with these problems,
and comparing floating point numbers is a PITA. I had users complaining that
0.3 is not 0.3.

The code managing these comparisons turned out to be more complex than
expected. The idea is that values are represented as ranges, so, for example,
the IEEE-754 "0.3" is represented as ]0.299~, 0.300~[ which makes it equal to
a true 0.3, because 0.3 is within that range.
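
In Python terms the effect is close to a tolerant comparison, though the
framework described above models explicit ranges (sketch only):

      import math

      lhs = 0.1 + 0.2                # 0.30000000000000004
      print(lhs == 0.3)              # False: exact comparison fails
      print(math.isclose(lhs, 0.3))  # True: within relative tolerance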

~~~
mynameisvlad
It's the first result for "floating point site" on Google. Sure the domain
itself is impossible to remember, but you don't have to remember the actual
number, just what it stands for.

~~~
usr1106
Remember the filter bubble. My first result is not your first result (although
in this case it happens to be, but we both probably search a lot on
programming).

~~~
mynameisvlad
I also did it in an InPrivate window to confirm, which is still somewhat
targeted but far less so than on my actual account. It's still first.

And, at the end of the day, even if there's a filter bubble and it's the
reason I see it first, so what? The people looking for this site are likely
going to fit into the same set of targeted demographics as you and me and most
people on this site. So unless you also want to cater to 65-year-old retirees
who don't care about computer science or what floating-point numbers are, why
does the filter bubble even matter?

------
mc3
This is a good thing to be aware of.

Also the "field" of floating point numbers is not commutative†, (can run on JS
console:)

      x=0; for (let i=0; i<10000; i++) { x+=0.0000000000000000001; }; x+=1
      --> 1.000000000000001

      x=1; for (let i=0; i<10000; i++) { x+=0.0000000000000000001; };
      --> 1

Although most of the time a+b===b+a can be relied on. And for most of the
stuff we do on the web it's fine!††

† edit: Please s/commutative/associative/, thanks for the comments below.

†† edit: that's wrong! Replace with (a+b)+c === a+(b+c)

~~~
gus_massa
Note that the addition is commutative [1], i.e. a+b==b+a always.

What is failing is associativity, i.e. (a+b)+c==a+(b+c)

For example

      (.0000000000000001 + .0000000000000001) + 1.0
      --> 1.0000000000000002

      .0000000000000001 + (.0000000000000001 + 1.0)
      --> 1.0

In your example, you are mixing both properties,

      (.0000000000000001 + .0000000000000001) + 1.0
      --> 1.0000000000000002

      (1.0 + .0000000000000001) + .0000000000000001
      --> 1.0

but the difference is caused by the lack of associativity, not by the lack of
commutativity.

[1] Perhaps you must exclude -0.0. I think it is commutative even with -0.0,
but I'm never 100% sure.

~~~
thaumasiotes
I tried to determine how to perform IEEE 754 addition (in order to see whether
it's commutative) by reading the standard: [https://sci-hub.tw/10.1109/IEEESTD.2019.8766229](https://sci-hub.tw/10.1109/IEEESTD.2019.8766229)

(Well, it's a big document. I searched for the string "addition", which occurs
just 41 times.)

I failed, but I believe I can show that the standard requires addition to be
commutative in all cases:

1. "Clause 5 of this standard specifies the result of a single arithmetic
operation." (§10.1)

2. "All conforming implementations of this standard shall provide the
operations listed in this clause for all supported arithmetic formats, except
as stated below. Unless otherwise specified, each of the computational
operations specified by this standard that returns a numeric result shall be
performed as if it first produced an intermediate result correct to infinite
precision and with unbounded range, and then rounded that intermediate result,
if necessary, to fit in the destination’s format" (§5.1)

Obviously, addition of real numbers is commutative, so the intermediate result
produced for addition(a,b) must be equal to that produced for addition(b,a). I
hope, but cannot guarantee, that the rounding applied to that intermediate
result would not then depend on the order of operands provided to the addition
operator.

3. "The operation addition(x, y) computes x+y. The preferred exponent is
min(Q(x), Q(y))." (§5.4.1). This is the entire definition of addition, as far
as I could find. (It's also defined, just above this statement, as being a
general-computational operation. According to §5.1, a general-computational
operation is one which produces floating-point or integer results, rounds all
results according to §4, and might signal floating-point exceptions according
to §7.)

4. The standard _encourages_ programming language implementations to treat
IEEE 754 addition as commutative (§10.4):

> A language implementation preserves the literal meaning of the source code
> by, for example:

> - Applying the properties of real numbers to floating-point expressions
> only when they preserve numerical results and flags raised:

> -- Applying the commutative law only to operations, such as addition and
> multiplication, for which neither the numerical values of the results, nor
> the representations of the results, depend on the order of the operands.

> -- Applying the associative or distributive laws only when they preserve
> numerical results and flags raised.

> -- Applying the identity laws (0 + x and 1 × x) only when they preserve
> numerical results and flags raised.

This looks like a guarantee that, in IEEE 754 addition, "the representation of
the result" (i.e. the sign/exponent/significand triple, or a special infinite
or NaN value - §3.2) does not "depend on the order of the operands". §3.2
specifically allows an implementation to map multiple bitstrings ("encodings")
to a single "representation", so it's possible that the bit pattern of the
result of an addition may differ depending on the order of the addends.

5. "Except for the quantize operation, the value of a floating-point result
(and hence its cohort) is determined by the operation and the operands’
values; it is never dependent on the representation or encoding of an
operand."

"The selection of a particular representation for a floating-point result is
dependent on the operands’ representations, as described below, but is not
affected by their encoding." (both from §5.2)

HOWEVER...

6. §6, dealing with infinite and NaN values, implicitly contemplates that
there might be a distinction between addition(a,b) and addition(b,a):

> Operations on infinite operands are usually exact and therefore signal no
> exceptions, including, among others,

> - addition(∞, x), addition(x, ∞), subtraction(∞, x), or subtraction(x, ∞),
> for finite x (§6.1)

------
maxdamantus
I feel like it should really be emphasised that the reason this occurs is due
to a mismatch between binary exponentiation and decimal exponentiation.

_0.1 = 1 × 10^-1_, but there is no integer significand _s_ and integer
exponent _e_ such that _0.1 = s × 2^e_.

When this issue comes up, people seem to often talk about fixing it by using
decimal floats or fixed-point numbers (using some _10^x_ divisor). If you
change the base, you solve the problem of representing _0.1_ , but whatever
base you choose, you're going to have unrepresentable rationals. Base 2 fails
to represent _1 /10_ just as base 10 fails to represent _1 /3_. All you're
doing by using something based around the number _10_ is supporting numbers
that we expect to be able to write on paper, not solving some fundamental
issue of number representation.

Also, binary-coded decimal is irrelevant. The thing you're wanting to change
is _which_ base is used, not how any integers are represented in memory.
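
The standard library makes the mismatch easy to inspect: `Fraction(0.1)`
recovers the exact rational that the float actually stores (Python,
illustrative):

      from fractions import Fraction

      print(Fraction(1, 10))   # 1/10, the number we meant
      print(Fraction(0.1))     # 3602879701896397/36028797018963968, the nearest s * 2**e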

~~~
lopmotr
Agree. All of these floating point quirks are not actually problems if you
think of them as being finite precision approximations to real numbers, not in
any particular base. Just like physical measurements of continuous quantities.
You wouldn't be surprised to find an error in the 15th significant figure of
some measurement or attempt to compare them for equality or whatever. So don't
do it with floating point numbers either and everything will work perfectly.

Yes, there are some exceptions where you can reliably compare equality or get
exact decimal values or whatever, but those are kind of hacks that you can
only take advantage of by breaking the abstraction.

------
ufo
One small tip about printf for floating point numbers. In addition to "%f",
you can also print them using "%g". While the precision specifier in %f refers
to digits after the decimal period, in %g the precision refers to the number
of significant digits. The %g version is also allowed to use exponential
notation, which often results in more pleasant-looking output than %f.

    
    
       printf("%.4g", 1.125e10) --> 1.125e+10
       printf("%.4f", 1.125e10) --> 11250000000.0000

~~~
kps
And %e _always_ uses exponential notation. Then there's %a, which can be exact
for binary floats.
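
Python exposes the same exact form via `float.hex` (illustrative):

      x = 0.1
      print(x.hex())                      # 0x1.999999999999ap-4, the exact stored value
      print(float.fromhex(x.hex()) == x)  # True: the hex form round-trips losslessly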

------
amyjess
One of my favorite things about Perl 6 is that decimal-looking literals are
stored as rationals. If you actually want a float, you have to use scientific
notation.

Edit: Oh wait, it's listed in the main article under Raku. Forgot about the
name change.

------
lelf
That’s only formatting.

The other (and more important) matter, which is not even mentioned, is
comparison. E.g. in languages that are rational by default in this specific
case (Perl 6),

    
    
      > 0.1+0.2==0.3
      True
    

Or APL (now they are floats there! But comparison is special):

    
    
          0.1+0.2
      0.3
          ⎕PP←20 ⋄ 0.1+0.2
      0.30000000000000004
          (0.1+0.2) ≡ 0.3
      1

~~~
Athas
Exactly what are the rules for the "special comparison" in APL? That sounds
horrifying to me.

~~~
lelf
Assume the values could be equal if their relative difference is smaller than
a small predefined value (called “⎕CT”, comparison tolerance, and you can
change it).

~~~
enriquto
But this is not an equivalence relation: you may have a=b and b=c but a!=c.

It's horrifying!

------
DonHopkins
The runner up for length is FORTRAN with:
0.300000000000000000000000000000000039

And the length (but not value) winner is Go with:
0.299999999999999988897769753748434595763683319091796875

~~~
exegete
Those look like the same length

~~~
Thorrez
Huh? The fortran one is 38 characters long with 33 0s after the 3. The go one
is 56 characters long with 15 9s after the 2.

~~~
exegete
I’m on mobile. Must be the issue.

------
jonny_eh
> It's actually pretty simple

The explanation then goes on to be very complex. e.g. "it can only express
fractions that use a prime factor of the base".

Please don't say things like this when explaining things to people, it makes
them feel stupid if it doesn't click with the first explanation.

I suggest instead "It's actually rather interesting".

~~~
headmelted
Ditto, as I now feel stupid.

I read the rest of your reply, but I also haven't let go of the possibility
that we're both (or precisely 100.000000001% of us collectively) as thick as
a stump.

~~~
omar_a1
To be fair, this is also done in every other STEM field, and CS is no
exception.

We could all learn a lot more from each other if everything wasn't a contest
all the time.

------
garyclarke27
PostgreSQL figured this out many years ago with its Decimal/Numeric type. It
can handle any size of number and it performs fractional arithmetic perfectly
accurately - how amazing for the 21st century! It is comically tragic to me
that all of the mainstream programming languages are still so far behind, so
primitive that they do not have a native accurate number type that can handle
fractions.

~~~
josefx
> how amazingly for the 21st Century!

Most languages have classes for that, some had them for decades in fact.
Hardware floating point numbers target performance and most likely beat any of
those classes by orders of magnitude.

------
Ididntdothis
I still remember when I encountered this and nobody else in the office knew
about it either. We speculated about broken CPUs and compilers until somebody
found a newsgroup post that explained everything. Makes me wonder why we
haven't switched to a better floating point model over the last few decades.
It will probably be slower, but a lot of problems could be avoided.

~~~
maxdamantus
Unless you have a floating point model that supports arbitrary bases, you're
always going to have the issue. Binary floats are unable to represent 1/10
just as decimal floats are unable to represent 1/3.

And in case anyone's wondering about handling it by representing the repeating
digits instead, here's the decimal representation of 1/12345 using repeating
digits:

    
    
      0.0[0008100445524503847711624139327663021466180639935196435803969218307006885378
      69582827055488051842851356824625354394491697043337383556095585257189145402997164
      84406642365330093155123531794248683677602268124746861077359254759011745646010530
      57918185500202511138112596192790603483191575536654515998379910895099230457675172
      13446739570676387201296071283920615633859862292426083434588902389631429728635074
      92912110166059133252328878088294856217091940056703118671526933981368975293641150
      26326447954637505062778452814904819765087079789388416362899959497772377480761441
      87930336168489266909680032401782098015390846496557310652085864722559740785743215
      87687322802754151478331308221952207371405427298501417577966788173349534224382341
      02875658161198865937626569461320372620494127176994734710409072498987444309437019
      03604698258404212231672742]

~~~
amelius
> Unless you have a floating point model that supports arbitrary bases

See also binary coded decimals.

[https://en.wikipedia.org/wiki/Binary-coded_decimal](https://en.wikipedia.org/wiki/Binary-coded_decimal)

~~~
snickerbockers
That's not a floating-point format.

~~~
amelius
From the article:

> Programmable calculators manufactured by Texas Instruments, Hewlett-Packard,
> and others typically employ a floating-point BCD format, typically with two
> or three digits for the (decimal) exponent.

~~~
snickerbockers
Then that's how they're encoding the components of the float. BCD itself is
not a floating-point format, it's just a different way of encoding a fixed-
point or integer value. If all you want to do is use floating point but expand
the exponent and mantissa, then that's completely tangential to whether
they're stored as BCD or regular binary values.

------
combatentropy
In JavaScript, you could use a library like decimal.js. For simple situations,
could you not just convert the final result to a precision of 15 or less?

    
    
      > 0.1 + 0.2;
      < 0.30000000000000004
    
      > (0.1 + 0.2).toPrecision(15);
      < "0.300000000000000"
    

From Wikipedia: "If a decimal string with at most 15 significant digits is
converted to IEEE 754 double-precision representation, and then converted back
to a decimal string with the same number of digits, the final result should
match the original string." --- [https://en.wikipedia.org/wiki/Double-precision_floating-point_format](https://en.wikipedia.org/wiki/Double-precision_floating-point_format)

------
ChuckMcM
That is why I only used base 2310 for my floating point numbers :-). FWIW
there are some really interesting decimal format floating point libraries out
there (see [http://speleotrove.com/decimal/](http://speleotrove.com/decimal/)
and [https://github.com/MARTIMM/Decimal](https://github.com/MARTIMM/Decimal))
and the early computers had decimal as a native type
([https://en.wikipedia.org/wiki/Decimal_computer#Early_computers](https://en.wikipedia.org/wiki/Decimal_computer#Early_computers))

~~~
ergfdseragf
The product of the first 5 primes ;)

------
skohan
This is part of the motivation for Swift Numerics, which is making it much
nicer to do numerical computing in Swift.

[https://swift.org/blog/numerics/](https://swift.org/blog/numerics/)

~~~
mvelie
Swift also has Decimal (so does Objective-C), which handles this properly. See
[https://lists.swift.org/pipermail/swift-users/Week-of-Mon-20161219/004220.html](https://lists.swift.org/pipermail/swift-users/Week-of-Mon-20161219/004220.html)
to see how Swift's implementation of Decimal differs from Objective-C's.

------
gowld
This is a great shibboleth for identifying mature programmers who understand
the complexity of computers, vs arrogant people who wonder aloud how systems
developers and language designers could get such a "simple" thing wrong.

~~~
hutzlibu
" vs arrogant people who wonder aloud how systems developers and language
designers could get such a "simple" thing wrong."

I never heard anyone claim that it would be simple to fix. But complaining?
Yes, and rightfully so. Not every web programmer needs to know the hardware
details, nor wants to, so it is understandable that this causes irritation.

------
dunham
Interesting, I searched for "1.2-1.0" on google. The calculator comes up and
it briefly flashes 0.19999999999999996 (and no calculator buttons) before
changing to 0.2. This happens inconsistently on reload.

------
YeGoblynQueenne
SWI-Prolog (listed in the article) also supports rationals:

    
    
      ?- A is rationalize(0.1 + 0.2), format('~50f~n', [A]).
      0.30000000000000000000000000000000000000000000000000
      A = 3 rdiv 10.

------
okennedy
This specific issue nearly drove me insane trying to debug a SQL ->
C++/Scala/OCaml transpiler years ago. We were using the TPC-H benchmark as
part of our test suite, and (unbeknownst to me), the validation parameters for
one of the queries (Q6) triggered this behavior (0.6+0.1 != 0.7), but only in
the C++/Scala targets. OCaml (around which we had built most of our debugging
infrastructure) handled the math correctly...

Fun times.

------
goosehonk
When did RFC1035 get thrown under the bus? According to it, with respect to
domain name labels, "They must start with a letter" (2.3.1).

~~~
jlv2
Long, long ago. 3com.com wanted to exist.

~~~
yellowapple
Amazingly, 3.com apparently _didn't_ want to exist.

------
dec0dedab0de
I wish high-level languages (specifically Python) would default to using
decimal, and only use a float when specifically cast. From what I understand
that would make things slower, but in a higher-level language you're already
trading speed for ease of understanding.
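
Today that's opt-in via the standard library, e.g.:

      from decimal import Decimal

      print(0.1 + 0.2)                        # 0.30000000000000004 (binary float default)
      print(Decimal("0.1") + Decimal("0.2"))  # 0.3 (explicit decimal)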

That said, it's one of my favorite trivia gotchas.

------
mytailorisrich
Fixed-point calculations seem to be somewhat of a lost art these days.

It used to be widespread because floating point processors were rare and any
floating point computation was costly.

That's no longer the case, and everyone seems to immediately use floating
point arithmetic without being fully aware of the limitations and/or without
considering the precision needed.

------
qwerty456127
As soon as I started developing real-life business apps, I started to dream
about POWER, which is said to have hardware decimal type support. Java's
BigDecimal solves the problem on x86, but it is at least an order of magnitude
slower than FPU-accelerated types.

~~~
ernst_klim
Well, if your decimals are fixed-point decimals, which is the case in finance,
decimal calculations are very cheap integer calculations (with simple
additional scaling in multiplication/division).
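
A sketch of that rescaling step in Python (the names and scale are made up):

      SCALE = 100   # two decimal digits

      def fx_mul(a: int, b: int) -> int:
          # Both operands carry the scale, so the product carries it twice;
          # divide one factor of SCALE back out, rounding half up.
          return (a * b + SCALE // 2) // SCALE

      print(fx_mul(19_99, 2_50))   # 19.99 * 2.50 = 49.975 -> 4998, i.e. 49.98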

I just use Zarith (a bignum library) in OCaml for decimal calculation, and am
pretty content with the performance.

I don't think many domains need decimal floating point that much, honestly, at
least not finance or scientific calculations.

But I could be wrong, and would be interested in cases where decimal floating-
point calculations are preferable to decimal fixed-point or binary IEEE
floating-point ones.

~~~
qwerty456127
Why doesn't everybody do it this way then? We would probably have a
transparent built-in decimal type in every major language by now if there were
no problems with this.

~~~
ernst_klim
> Why doesn't everybody do it this way then?

Why? Fintech uses decimal fixed-point all the way; there are libraries for it
in any major language. Apps like GnuCash or ledger use them as well.

~~~
qwerty456127
But Java has BigDecimal in its standard library and it's soooo slow I doubt it
is implemented this way.

------
povik
In the Go example, can someone explain the difference between the first and
the last case?

~~~
ehsankia
There's a link right below. It seems like

1. Constants have arbitrary precision.

2. When you assign them, they lose precision (example 2).

3. You can format them at arbitrary precision in a string (example 3).

In that last example, they are getting 54 significant digits in base 10.

~~~
povik
Thanks. What I didn’t realize is that although the sum is done precisely, the
resulting 0.3 will be represented approximately once converted to float64. In
the first case formatting hides that, in the last it doesn’t.

~~~
ehsankia
I think in the last example, it's going straight from arbitrary precision to
54 significant digits, bypassing float64 entirely, hence why it looks
different
from the middle example.

------
tus88
Mods: Can we have a top level menu option called "Floating point explained"?

------
gumby
Not surprisingly, Common Lisp gets it right. I don't mean this as snark (I
don't mean to imply you are a weenie if you don't use Lisp) but just to show
that it picked a different kind of region in the language design domain.

------
thanatropism
Computer languages should default to fixed-precision decimals and offer floats
with special syntax (e.g. “0.1f32”).

The status quo is that even Excel defaults to floats and wrong calculations
with dollars and cents are widespread.

------
Waterluvian
The thing that surprised me the most (because I never learned any of this in
school) was not just the lack of precision to represent some numbers, but that
precision falls off a cliff for very large numbers.

------
alberth
TL;DR: 0.1 in base 2 (binary) is the equivalent of 1/3 in base 10, meaning
it's a repeating fraction that causes rounding issues (like 0.333333
repeating).

This is why you should never test “X == 0.1”: it might not evaluate
accurately.

------
0xDEEPFAC
Whoo, go Ada, one of the few to get it right. Must be the go-to for secure
programming for a reason.

Take that Rust and C ; )

------
bluetwo
Happy to see ColdFusion doing it right. Also, good for Julia for having
support for fractions.

------
cogburnd02
I love how Awk, bc, and dc all DTRT. I wonder what postscript(/Ghostscript?)
does.

------
xkriva11
For Smalltalk the list is not complete; it has scaled decimals and fractions
too: 0.1s + 0.2s = 0.3s, and (1/10) + (2/10) = (3/10).

------
adamc
Those Babylonians were ahead of their time.

------
threatofrain
Use Int types for programming logic.

~~~
saagarjha
Except when, you know, you can’t.

~~~
umanwizard
Curious, when can't you?

My mental model of floating-point types is that they are useful for
scientific/numeric computations where values are sampled from a probability
distribution and there is inherently noise, and not really useful for
discrete/exact logic.

~~~
saagarjha
Right; for the former integer arithmetic won't do.

~~~
umanwizard
Yep, absolutely (and increasingly often people are using 16-bit floats on GPUs
to go even faster).

But the person you replied to said programming _logic_ , not programming
anything.

Honestly I think if you care about the difference between `<` and `<=`, or if
you use `==` ever, it's a red flag that floating-point numbers might be the
wrong data type.

------
cellular
Why is D different than the rest?!

------
mttpgn
bc actually computes this correctly, and returns 0.3 for 0.1 + 0.2

------
idonotknowwhy
This has been posted here many times before. It even got mocked on n-gate in
2017
[http://n-gate.com/hackernews/2017/04/07/](http://n-gate.com/hackernews/2017/04/07/)

------
edisonjoao
lol what

------
beckerdo
Please check some of the online papers on Posit numbers and Unum computing,
especially by John Gustafson. In general, Unums can represent more numbers,
with less rounding, and fewer exceptions than floating points. Many software
and hardware vendors are starting to do interesting work with Posits.

~~~
StefanKarpinski
Probably one of the more in-depth technical discussions of the pros and cons
of the various proposals that John Gustafson has made over the years:

[https://discourse.julialang.org/t/posits-a-new-approach-could-sink-floating-point-computation/26176](https://discourse.julialang.org/t/posits-a-new-approach-could-sink-floating-point-computation/26176)

------
pmarreck
IEEE floating-point is disgusting. The non-determinism and illusion of
accuracy is just wrong.

I use integer or fixed-point decimal if at all possible. If the algorithm
needs floats, I convert it to work with integer or fixed-point decimal
instead. (Or, if possible, I treat the decimal point as a "rendering concern"
and just do the math in integers, leaving the view to place the decimal point
according to whatever my selected precision is.)

~~~
saagarjha
IEEE is deterministic and (IMO) quite well thought-out. What specifically do
you not like about it?

~~~
pmarreck
The fact that the most trivial floating-point addition of 0.1 + 0.2 =
0.30000000000000004 was insufficient to make this seem human-nondeterministic
to you? (I mean, sure, if you thoroughly understood the entire spec, you might
not be surprised by this result, but many people would be! Otherwise the
original post and website would not exist, no?)

It’s kind of a hallmark of bad design when you have to go into a long-winded
explanation of why even trivial use-case examples have “surprising” results.

~~~
jcranmer
⅓ to 3 decimal places is 0.333. 0.333 + 0.333 = 0.666, which is not ⅔ (to 3
decimal places, that is 0.667). That is all that is happening with the 0.1 +
0.2.

The word you're looking for is "surprising," which is a far cry from non-
deterministic. IEEE 754 is so thoroughly deterministic that there exist
compiler flags whose sole purpose is to say "I don't care that my result is
going to be off by a couple of bits from IEEE 754."

