
Using floating-point numbers for money - ingve
https://www.evanjones.ca/floating-point-money.html
======
antonyme
Sorry, but I remain skeptical. The same people who use `float` for financial
calculations are probably the same people who don't understand how/why to do
the rounding described to avoid these problems. Way too many programmers think
that they are working in base 10 when using float. Why not just make them use
a proper decimal type, and keep them out of trouble?

Also, I wonder what the overhead of rounding every operation is? Comparable to
the cost of using a proper Decimal class?

"if you are doing some financial math that does not need to be accurate to the
penny, just use floating point numbers."

I would argue that financial math by definition needs to be accurate to the
penny. Where are "pretty close" financial calculations considered acceptable?
Having worked at a bank, I know how seriously this sort of thing is taken.

From my experience working in scientific applications and numerical computing,
summing large numbers of floats is fraught with accuracy problems too.

~~~
dahart
> I wonder what the overhead of rounding every operation is? Comparable to the
> cost of using a proper Decimal class?

For what it’s worth, for the major ICs (Intel, NVIDIA, etc.) there is zero
extra overhead. A choice of rounding modes is part of the floating point
operation’s instruction. And keep in mind that a floating point op is _always_
rounding no matter what you do, the question is whether it’s always using the
same rounding strategy consistently, whether you can control it, and whether
it has what you need.

> I remain skeptical

You are correct.

There are good reasons not to use float for money that the article didn’t
discuss, and perhaps the author isn’t aware of. With 32-bit floats you run out
of integer precision at 2^24, which is only 16 million. If you process a 20
million dollar payment in units of dollars, you might be off by at least a
dollar. That error will compound with every floating point op done on the
result. If your units are pennies, the largest safe value is only 160,000
dollars.
ever subtract floats, like say make a payment or withdrawal, you can run into
catastrophic cancellation without knowing it. Deposit $200k and then withdraw
$199k, suddenly you have a small balance with large error that could remain in
your account and continue to grow until the balance is zero.
[https://pharr.org/matt/blog/2019/11/03/difference-of-floats.html](https://pharr.org/matt/blog/2019/11/03/difference-of-floats.html)
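A quick sketch of both failure modes (my own illustration, not from the
thread, using Python's `struct` module to emulate 32-bit floats):

```python
import struct

def f32(x: float) -> float:
    """Round-trip a value through IEEE-754 binary32 (i.e., C's float)."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Integer precision runs out at 2**24 = 16,777,216: past that, adding one
# to a balance can silently be a no-op.
assert f32(16_777_216.0 + 1.0) == f32(16_777_216.0)

# In units of pennies, the exactness ceiling is only $167,772.16, so a $200k
# balance is already inexact, and subtracting two large balances amplifies
# the relative error of the small difference (catastrophic cancellation).
deposit = f32(200_000.01)        # stored as 200000.015625
withdrawal = f32(199_000.00)     # stored exactly
print(deposit - withdrawal)      # off from 1000.01 by more than half a cent
```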

~~~
jbay808
It has great benefits to the user, though. Once you have $20 million in the
bank, you can continue to withdraw $1 at a time as often as you like without
spending any of your principal!

~~~
zitterbewegung
If by "benefits to the user" you mean the bank made a mistake and the IT
department now has a production P0 issue that will have everyone working to
fix it yesterday, then sure.

------
kstenerud
No, do not do financial calculations in binary floating point. Converting
between base 2 and base 10 fractions can do funny things (including during the
rounding step, which is also 99% guaranteed to be inaccurate because base 10
fractions can't be exactly represented as base 2 fractions).

Most professional financial packages use fixed-point decimal, which can easily
be implemented by specifying a fractional unit (such as thousandths of cents).
The computations will be faster (because they're ints), at 64 bits you get a
min/max value of +- 9,223,372,036,854,775,807 units (about 92 trillion dollars
if you're using thousandths of cents), and rounding functions will always be
exact (provided you have enough digits of scratch precision).
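As a rough sketch of that idea (the function names and layout are mine, not
from any particular package), fixed point in thousandths of cents is just
integer arithmetic plus careful parsing and formatting:

```python
# Fixed-point money as plain ints, in thousandths of cents
# (1 dollar = 100,000 units). No float is ever involved.
UNITS_PER_DOLLAR = 100_000

def parse_dollars(s: str) -> int:
    """Parse a decimal string exactly into integer units."""
    sign = -1 if s.startswith("-") else 1
    whole, _, frac = s.lstrip("+-").partition(".")
    frac = (frac + "00000")[:5]            # pad/truncate to 5 fractional digits
    return sign * (int(whole or "0") * UNITS_PER_DOLLAR + int(frac))

def format_dollars(u: int) -> str:
    sign = "-" if u < 0 else ""
    whole, frac = divmod(abs(u), UNITS_PER_DOLLAR)
    return f"{sign}{whole}.{frac:05d}"

assert parse_dollars("19.99") == 1_999_000
assert format_dollars(3 * parse_dollars("19.99")) == "59.97000"
```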

There's also the 2008 addition of decimal floating-point types to IEEE 754
[1], which have been implemented in gcc and clang (software emulation only).

So no, don't use binary floats for financial calculations. You're 99.9%
guaranteed to get it wrong and introduce bugs.

[1] [https://en.wikipedia.org/wiki/Decimal64_floating-point_format](https://en.wikipedia.org/wiki/Decimal64_floating-point_format)

~~~
ComputerGuru
A notable exception to the general lack of floating decimal support is C#,
which has had floating decimal (System.Decimal) support in the CLR from the
start (128 bits with a scale of 0 to 28).

[https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/types#the-decimal-type](https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/types#the-decimal-type)

~~~
sixstringtheory
Apple SDKs have NSDecimal [0] and NSDecimalNumber [1].

[0]:
[https://developer.apple.com/documentation/foundation/nsdecim...](https://developer.apple.com/documentation/foundation/nsdecimal)

[1]:
[https://developer.apple.com/documentation/foundation/nsdecim...](https://developer.apple.com/documentation/foundation/nsdecimalnumber?language=objc)

------
sixstringtheory
The author links to an article about how Excel does all its computations in
64-bit floating point, in a way that seems to bolster their point. The actual
title of that article? “Floating-point arithmetic may give inaccurate results
in Excel” ([https://docs.microsoft.com/en-us/office/troubleshoot/excel/floating-point-arithmetic-inaccurate-result](https://docs.microsoft.com/en-us/office/troubleshoot/excel/floating-point-arithmetic-inaccurate-result)).

How coincidental that this should show up on the front page at the same time
as the thread on normalization of deviance in software:
[https://news.ycombinator.com/item?id=22144330](https://news.ycombinator.com/item?id=22144330)

What is wrong with using a currency library?

~~~
chrisseaton
> What is wrong with using a currency library?

Do currency libraries exist, for example, for GPUs? (Maybe they do.)

~~~
raverbashing
If you need your financial code to run on the GPU, either you don't need
fixed-point numbers (simulations) or your code is extremely inefficient.

~~~
chrisseaton
> you don't need fixed point number

That was my point.

------
OskarS
Of course you _can_ , but for the vast majority of cases, you _shouldn’t_.
Floating points aren’t designed to solve the problems financial calculations
bring, they’re designed for general purpose math and efficiency.

If you’re programming a point-of-sale system, or an ecommerce site or
something, using IEEE-754 floats would be madness. The performance gain over
decimal types is absolutely infinitesimal, and the cost of getting it wrong is
MASSIVE. Just don’t do it!

Of course, if you really do need the performance advantage floats offer (e.g.
high-frequency trading or machine learning), then by all means, use floats.
But that’s a tiny minority of those programmers who have to deal with money.

~~~
fyfy18
How would you suggest storing and calculating things like taxes? For example,
in NYC the retail sales tax is 8.735%. Obviously the final amount could be
stored as an integer in cents, but I'm talking about the tax rate and the
calculations with it. I guess you need to calculate the tax for each item,
round it to cents (and I assume the rules on how you round can vary by
jurisdiction, so it's not just calling your language's round function),
multiply that by the quantity of that item (good luck if the quantity can be a
decimal value), then sum each line item to get the total tax.
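One common shape for this, as a sketch with Python's decimal module (the rate,
the HALF_UP rule, and the per-item rounding point are illustrative assumptions,
since jurisdictions differ on all three):

```python
from decimal import Decimal, ROUND_HALF_UP

RATE = Decimal("0.08735")   # keep the rate itself as an exact decimal
CENT = Decimal("0.01")

def line_tax(unit_price: Decimal, qty: int) -> Decimal:
    # Tax per item, rounded to cents, then multiplied by quantity.
    # Whether you round per item or per line is a policy decision.
    per_item = (unit_price * RATE).quantize(CENT, rounding=ROUND_HALF_UP)
    return per_item * qty

items = [(Decimal("19.99"), 3), (Decimal("0.99"), 10)]
total_tax = sum(line_tax(price, qty) for price, qty in items)
print(total_tax)  # 6.15
```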

~~~
Yaa101
Easy: either implement a decimal type (4 fractional digits), as most
programming languages have them implemented by users, or multiply by 10000,
do your calculation, and then divide by 10000 again. I used a decimal type
for the administration program I coded.

~~~
geofft
> _Easy, either implement a decimal type_

Great, how do you become confident that your hand-rolled decimal type is less
buggy than the known hazards of using floating-point?

~~~
jdlshore
If you’ve never done something like this, you might want to try it. It’s a
nice simple programming exercise and the basic version, with tests, will take
less than an evening.

------
anonytrary
This is not great advice, and he defeats his own argument without realizing
it. The main problem with this article is the damn title, which is absolutely
awful advice. The body of the article ultimately contradicts the title. Anyone
who didn't read the article but read the title might be more likely to use
floating point numbers for money now.

> One piece of popular programming wisdom is "never using floating-point
> numbers for money." This has never made sense to me.

He then goes on to explain why it absolutely does and must make sense. It
honestly seems like the author's entire point is about not accepting blanket
statements.

> My summary: if you are doing some financial math that does not need to be
> accurate to the penny, just use floating point numbers.

Arguably all financial math needs to be accurate to the penny. I can't help
but think he is maliciously trolling, trying to convince people of something
that isn't true.

------
mongol
From the summary: "if you are doing some financial math that does not need to
be accurate to the penny"...

Of course, if you can accept rounding errors, it is fine. But in many monetary
applications, that is not OK. The article is quite pointless.

Edit: For example, when doing budget forecasts, the result will never be
accurate to the penny, so it is OK. But when doing accounting, you need to
account for every penny. Not OK.

------
fit2rule
Disclaimer: 30+ year veteran at fixing these fucking bugs.

My opinion: just NO.

MULTIPLE failures to understand the problem have occurred in this article, and
they deal with _representation_.

A) It is never acceptable to 'round up' on interest
payments/loans/debts/balances, without having an explicit line item for why.
"Because compiler decision made by programmer" is indefensible.

B) Representation is EVERYTHING. Case in point:

>>>>0.1 + 0.2: Produces 0.30000000000000004 but should be exactly 0.3.

This statement can kill.

C) Standards are there to keep everyone, and everything, 'honest', and by
honest it means: everyone understands things implicitly. Explain to grandma
why a rounding error has resulted in her teeth falling out, or GTFO.

D) Fixed point is a standard _representation_. It's not about the precision
fail, it's about whether everyone gets exactly the same results, every time,
in doing the math.

E) Lots and lots of these bugs yet to be fixed.

~~~
logicallee
Generally, is the penny indivisible in finance? (So is using integers fine for
everything, as long as you remember you are dealing with cents, not dollars?)

------
arcticbull
> Solution: Round after every operation

No, the solution is not to use floating point numbers to store money.

I worked on an iOS app in fintech for years, and let me assure you, using
floating point numbers to calculate currency is an exercise in frustration and
lack of correctness. When balances are wrong you're losing your customers'
money, which in turn erodes trust in your product. You know it's inexact, an
approximation, but your customers want an exact answer.

Just do it with a fixed-precision decimal number, and represent it as a
string.

~~~
Supermancho
> Just do it with a fixed-precision decimal number, and represent it as a
> string.

Why? If you are going to do that, might as well use an integer of cents. It's
more compact and if you're worried about these calculations and storing this
kind of information, it's likely you are storing lots of it (or it wouldn't be
a problem).

~~~
arcticbull
It’d need to be integer cents plus a scale factor which determines the
smallest unit of currency. Japan has a scale factor of 0, compared to 2 for
the US. A string incorporates this conveniently. Either is fine.
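A minimal sketch of that idea (the class and table names are mine; the
exponents are the standard ISO 4217 minor-unit exponents):

```python
from dataclasses import dataclass

# Minor-unit exponents per ISO 4217: USD has 2, JPY has 0.
EXPONENT = {"USD": 2, "JPY": 0}

@dataclass(frozen=True)
class Money:
    minor_units: int   # integer count of the smallest currency unit
    currency: str

    def __str__(self) -> str:
        exp = EXPONENT[self.currency]
        if exp == 0:
            return f"{self.minor_units} {self.currency}"
        sign = "-" if self.minor_units < 0 else ""
        major, minor = divmod(abs(self.minor_units), 10 ** exp)
        return f"{sign}{major}.{minor:0{exp}d} {self.currency}"

assert str(Money(123456, "USD")) == "1234.56 USD"
assert str(Money(5000, "JPY")) == "5000 JPY"
```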

------
EastLondonCoder
Yes you _can_ but that's how you get bugs.

I worked in the online gambling industry for 15 years, and there are a large
number of use cases on a gambling front end where you do financial arithmetic.

I have seen equality between two floats used as the end condition for a
count-up win animation. I don't think I have to explain what horrible thing
that led to.

------
jdlshore
There’s more to money than what happens in the checkout line, and that’s where
the problems with floating-point math arise. For example, one codebase I’m
aware of ran into serious problems with line item refunds when currency
conversion was involved, IIRC. (“Serious” as in “couldn’t generate a refund
that exactly matched the charge on the customer’s original invoice.”)

~~~
gdm85
Exactly, the results are not reversible if you apply rounding.

------
andreareina
The funny thing is that if you're dealing with (US Treasury) bonds, (EDIT:
binary) floats are actually kind-of the right thing, since they trade in 32nds
of a dollar (and binary fractions thereof)[1][2]

> Unlike U.S. equity markets, which switched to decimal pricing in 2001, U.S.
> Treasury securities still trade in fractions. In particular, prices are
> quoted in 32nds of a point, where a point equals one percent of par, with
> the 32nds themselves split into fractions. On the BrokerTec platform, for
> example, 3- and 5-year notes trade in quarters of 32nds, whereas 7-, 10-,
> and 30-year securities trade in halves of 32nds. The quoted price for a
> 5-year note might be 98-15¼, for example, indicating a price in decimal form
> of 98.4765625 (that is, 98 + 15⁄32 + ¼⁄32).

[1]
[https://www.newyorkfed.org/aboutthefed/fedpoint/fed07.html](https://www.newyorkfed.org/aboutthefed/fedpoint/fed07.html)

[2] [https://www.bloomberg.com/opinion/articles/2020-01-15/it-s-not-insider-trading-if-the-president-does-it](https://www.bloomberg.com/opinion/articles/2020-01-15/it-s-not-insider-trading-if-the-president-does-it)
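The quoted price is easy to check: 32nds and their halves and quarters are
dyadic fractions, so a binary float holds them exactly:

```python
# 98-15¼ from the excerpt: 98 + 15/32 + (1/4)/32
price = 98 + 15 / 32 + (1 / 4) / 32
assert price == 98.4765625     # exact: every term is a binary fraction
assert price * 128 == 12605.0  # scaling by a power of two stays exact
```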

~~~
jleahy
Unfortunately it's 32nds of a 100th, so they still don't work in floats
(because floats are stored normalised, as 1.x in binary).

Eg. 117.125

~~~
andreareina
Is it? The NY Fed site says it's 32nds of a dollar:

> • Prices are quoted in 32nds of a dollar.

> [...] Note and bond prices are quoted in dollars and fractions of a dollar.
> By market convention, the normal fraction used for Treasury security prices
> is 1/32.

~~~
jleahy
Right, but the price is about 100, not 1.

If you had a price of 1.25 then you could represent this as a float as
(binary) 1.01. However if you have 117.25 then that's (decimal) 1.83203125 *
2^6. You can see how you quickly run out of bits.

------
andrewaylett
From the article: "I am not a theoretician and have not proven that this is
actually correct."

So, yes, you might get away with floating point values for financials -- and I
have even seen banking code that did -- but that doesn't mean it's a good
idea. Especially when libraries providing fixed point and arbitrary precision
decimal representations are widely available and easy to use.

The biggest problem with using IEEE 754 for currencies is that IEEE 754
defines a domain and a set of operations that don't match the domain and set
of operations required for currency calculations. The discrepancies between
the "right" answers and the answers given by IEEE 754 aren't because IEEE 754
is in some way "wrong"; it's because it's not calculating what you think it's
calculating.

~~~
projektfu
I once had to send a bug report to a big bank, like Bank of America, because
their JavaScript front end wasn’t rounding and got the typical 0.1+0.2>0.3
error (or whatever). This meant that I couldn’t submit a dispute online and
had to do it on the phone. Like most companies, they don’t bother thanking
you, or letting you know if it’s fixed.

It’s great how coding style books rarely address these important things and
focus only on bikeshedding. Anyone know of a good presubmit checklist for
semantic issues?

------
mackman
I think this is kind of funny. Using floats and rounding at some fixed
precision before rounding to penny precision is basically the same as using
integer multiples of whatever you're first rounding the float to. So you're
not really using floating point anymore; you're just using a float type to
represent fixed-point integers.

The problem as I see it is that you need to be careful to round to the fixed
precision in the right places. The easiest way not to miss a place is to do it
after every operation, and the best way to do that is to abstract your money
type as a class. So now what we're comparing is a fixed-point class holding a
float vs. an int.

In my opinion, holding an int is the better option, because it's slightly more
obvious when the values overflow the maximum range of the integer than when
the precision of the float drops below your fixed-point rounding delta. In
either case you need to add some error handling, and I also prefer branching
on ints to floats (mostly because of a big perf difference on the CPU I used
to work on).

------
wmu
You can, but you shouldn't. Fixed-point numbers are way more reliable and
often faster, as they're implemented with integers; you can easily detect
overflow, you can increase or decrease precision if you need to, etc.

------
ajnin
So this article's advice is to use rounding, just after he showed an example
where rounding didn't work. Also, if you forget to round even once, you let
errors accumulate.
errors accumulate.

> if you are doing some financial math that does not need to be accurate to
> the penny

There is no such thing. I worked with applications that dealt with tens of
billions of Euros, and if the bottom line was off by even 1 cent, the users
would come to us to figure out what went wrong. Suggesting that it is
acceptable to knowingly introduce errors, when there is a pretty well-
established practice that avoids them entirely, is baffling.

------
Hackbraten
> Since we "know" the exact answers have a finite number of decimal digits, we
> can just round off the lower part of the numbers, which will produce the
> nearest float with that number of digits.

What does that even mean, “produce the nearest float with that number of
digits?”

If my float representation says, e.g., 2.6749999999999998, and I “tell” it to
round to three digits, the result is still 2.6749999999999998 in floating
point representation, isn’t it?

A floating point number doesn’t care about the number of significant decimal
digits you want it to have.

~~~
yorwba
Good floating-point printing functions output the shortest possible
representation, so if 2.675 and 2.6749999999999998 are indistinguishable,
print(2.6749999999999998) will show 2.675.

~~~
Hackbraten
My impression was that the article is about intermediate rounding between
calculation steps, rather than about printing a number to the screen.

~~~
andreareina
As far as (64-bit binary) floats are concerned, 2.675 and 2.6749999999999998
are in fact the same thing:

    
    
        $ python3
        Python 3.7.3 (default, Jun 16 2019, 16:10:46) 
        [Clang 8.0.0 (clang-800.0.42.1)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> 2.675 == 2.6749999999999998
        True
    

So IF you have enough excess precision at every step, and IF you do the
rounding right at each step, then you can get the same results using binary
fractions as with decimal.

Of course, correct rounding is still an issue when using a decimal type, since
round(sum(n for n in ns)) ≠ sum(round(n) for n in ns).

------
zxcvbn4038
Disagree with this one. The problem is that floating-point errors tend to be
cumulative: do something simple like summing a year of sales records and the
errors start to show up in the pennies surprisingly quickly. But luckily this
has been a solved problem for decades: fixed point and binary-coded decimal.

[https://en.m.wikipedia.org/wiki/Fixed-point_arithmetic](https://en.m.wikipedia.org/wiki/Fixed-point_arithmetic)

[https://en.m.wikipedia.org/wiki/Binary-coded_decimal](https://en.m.wikipedia.org/wiki/Binary-coded_decimal)
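The drift is easy to demonstrate (a toy illustration, not from the article):
sum one cent a million times in binary floating point versus in integer cents:

```python
float_total = 0.0
int_cents = 0
for _ in range(1_000_000):
    float_total += 0.01   # 0.01 has no exact binary representation
    int_cents += 1        # integer cents: always exact

assert int_cents == 1_000_000        # exactly $10,000.00
assert float_total != 10_000.0       # the float sum has drifted off
print(abs(float_total - 10_000.0))   # small but nonzero error, in dollars
```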

------
jspash
I ran into a weird problem this week trying to convert a string "8.95" to a
number (8.95) that made me question whether even Int is good enough and
whether, as suggested in the comments here, to use BigInt or equivalent for
anything to do with numbers.

This also happens with JavaScript and Elixir, not just Ruby. So perhaps it has
something to do with how the CPU stores the numbers?

I know this isn't StackOverflow, but could someone shed some light on to
what's happening here? [https://ideone.com/1U7rgf](https://ideone.com/1U7rgf)

~~~
sixstringtheory
The problem here isn’t integers at all, as 8.95 is not an integer but an
integer (8) plus a fraction (0.95, or 95/100 == 19/20). That’s in decimal, but
since computers use binary, your decimal fraction must be converted to a
binary fraction. This is what floating point is used for, although it’s
important to know that floating point is an approximation of the real number
line, and also that fractions that terminate in decimal may not terminate in
binary, namely when the reduced denominator is not a power of two.

Looks like that’s what’s going on with your example: a floating-point
approximation of an otherwise unrepresentable repeating binary fraction, which
resulted in that repeating sequence when converted back to decimal again. I
don’t know how BigDecimal works, but I guess just from the name that it is not
floating point but rationals represented by arbitrary-precision integer
numerators and denominators.

Definitely read up on how floating point representation works, with binary
mantissas and exponents, normalization, and converting decimal fractions to
binary fractions.
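Concretely, using Python's stdlib to inspect the parent's example: 8.95
reduces to 179/20, and since 20 is not a power of two its binary expansion
repeats, so the stored double is only a nearby value:

```python
from decimal import Decimal
from fractions import Fraction

# The reduced denominator of 8.95 is 20, not a power of two,
# so its binary expansion is non-terminating (repeating).
assert Fraction("8.95") == Fraction(179, 20)

# Decimal(x) shows the exact value of the double x:
stored = Decimal(8.95)
assert stored != Decimal("8.95")  # the stored double is not exactly 8.95
print(stored)                     # the long exact decimal expansion
```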

~~~
dragonwriter
> That’s in decimal, but since computers use binary

Well, if you tell them to, e.g., by using a binary floating point type instead
of a decimal type.

~~~
sixstringtheory
No, we aren’t using computers with logic gates that have 10 discrete states.
Even if you’re programming to decimal, the underlying representation is still
binary. But there are numerical differences between using decimals represented
by rationals with binary integer numerators and denominators, vs floating
point with binary mantissa and exponent.

~~~
dragonwriter
> No, we aren’t using computers with logic gates that have 10 discrete states.
> Even if you’re programming to decimal, the underlying representation is
> still binary.

That is not the sense of "computers use binary" that justifies the conclusions
in the sentence in which the phrase was used in the post being responded to,
so while true on its own, in context it is an example of the fallacy of
equivocation.

~~~
sixstringtheory
Could you help me understand where I’ve gone wrong here? I think there’s some
miscommunication going on.

~~~
dllthomas
Are you familiar with binary coded decimal? Decimal format floats are in
somewhat the same vein.

See, for instance, [https://en.m.wikipedia.org/wiki/Decimal32_floating-point_format](https://en.m.wikipedia.org/wiki/Decimal32_floating-point_format)

~~~
sixstringtheory
This is basically what I meant when I said "computers use binary".

~~~
dllthomas
That sense is trivially true, but it doesn't follow that you have binary
fractions.

~~~
sixstringtheory
Sorry, but I'm not parsing this sentence:

> it doesn't follow that you have binary fractions

Is that in reference to something I said earlier, or are we just falling
deeper down a rabbit hole of hair splitting?

The point I was driving at is that a given number we think about in decimal,
like 12.3456, can be represented in binary in more than one way: one is
rationals backed by integers for num/den, another is floating point with
mantissa and exponent. Which way is chosen ultimately affects the outcomes of
computations. I was trying to explain why OP was seeing unexpected results. I
don’t think I’ve been mistaken in the explanations...

~~~
dllthomas
The issue is that you are missing the possibility of decimal floating point,
which is also defined in IEEE 754 but which is typically not exposed as a
primitive in programming languages. Imagine a BCD mantissa. That _can_ encode
12.3456 exactly.

So it's not "since computers use binary", but "since we use binary floating
point". There are fair arguments that we made that choice because of
efficiency concerns driven by the fact that "computers use binary", but I
don't think that really helps the explanation.

Note that the original objection was picking at a nit that I am not sure I
would have picked at, I am just seeking to clarify.

------
seanalltogether
Why do so many languages not have a decimal type as a primitive value? Is it
legacy? Is it because a decimal type must fundamentally be an object that
wraps some other primitive values?

~~~
gdm85
Looks like Go v2 might get decimal floating point numbers support:
[https://github.com/golang/go/issues/19787](https://github.com/golang/go/issues/19787)

------
Osiris
> My summary: if you are doing some financial math that does not need to be
> accurate to the penny, just use floating point numbers.

So... Don't use floats where accuracy is important? I agree.

------
zAy0LfpBZLC8mAC
> 0.1 + 0.2: Produces 0.30000000000000004

No, it doesn't. It produces
0.3000000000000000444089209850062616169452667236328125.

> 1.40 * 165: produces 230.99999999999997

No, it doesn't. It produces 230.999999999999971578290569595992565155029296875.

> round(2.675, 2) produces 2.67

No, it doesn't. It produces
2.6699999999999999289457264239899814128875732421875.

> This is because 2.675 is actually 2.6749999999999998

No, it isn't. For one, 2.675 obviously is not 2.6749999999999998, but it also
isn't converted to 2.6749999999999998 to be stored as a floating point number,
it is converted to 2.67499999999999982236431605997495353221893310546875.

> round(2.665, 2) which produces 2.67

No, it doesn't. It produces
2.6699999999999999289457264239899814128875732421875.

> However, in floating-point numbers, it is above the halfway point
> (0.00500000000000000097)

Well, it is above the halfway point, but it's actually
0.0050000000000000009714451465470119728706777095794677734375.

> I am not a theoretician and have not proven that this is actually correct.

So, all the numbers are wrong, you have no clue whether your wrong results
generalize, but you use that as the basis for advice to other people? I only
can hope that you don't write software for other people to use.
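For anyone who wants to check these figures: Python's Decimal, given a float,
converts the stored double exactly, digit for digit (the two expansions below
are copied from the figures above):

```python
from decimal import Decimal

assert str(Decimal(0.1 + 0.2)) == (
    "0.3000000000000000444089209850062616169452667236328125")
assert str(Decimal(2.675)) == (
    "2.67499999999999982236431605997495353221893310546875")
```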

------
allard
Isn't it just the difference between measuring and counting?

There's no occurrence of the word count here
[https://en.m.wikipedia.org/wiki/Accuracy_and_precision](https://en.m.wikipedia.org/wiki/Accuracy_and_precision).

We can count things and know that number exactly. We cannot measure anything
to exactness.

------
scott-smith_us
Another consideration:

We've all probably read that “Programs must be written for people to read, and
only incidentally for machines to execute.”

When another (maintenance) developer sees float/double being used to represent
money, their first reaction is going to be "uh oh, this dev didn't know what
s/he was doing". There's a big risk of someone reflexively refactoring this
later on.

You may ask, "What if I put in big comment blocks saying essentially 'I knew
what I was doing here'?"

Well, even if maintenance devs read that and believe you, they're still likely
to introduce errors, since they aren't used to using floats for money, and
don't know all the tricks to avoid errors...

Unless there's some justifiable win in size / performance, this (although
interesting) falls under my "Don't be too clever" rule.

------
ivanhoe
> We can solve these problems by rounding after every operation.

In my experience one should do exactly the opposite: round/format only the
very last result, when you need to print it, and otherwise keep all the
decimals in the intermediate results to avoid accumulating rounding errors, as
in the Superman 3 bank hack (aka "salami slicing"). I did a lot of affiliate
and e-commerce software in my time, and if you're careful you can use floats
(sometimes you have to deal with old systems working that way), but fixed-
point calculations with integers are way easier and safer. Just make sure to
store enough extra decimal places, not just the 2 that you print, and you can
calculate interest, exchange currencies, charge a 0.5% fee on $0.01-per-click
transactions, whatever; it works just fine.

------
erpellan
A lot of financial firms use an integer of the smallest indivisible unit of
currency wherever possible (Stripe is one). This has the handy property that
string and numeric representations are identical, so there is no risk of being
flipped into scientific notation by some layer. That happens more often than
you’d think.

FX is a different world. Spot FX trading systems are fully automated and
compute rates to 5 DP. That was 15 years ago, I don’t know if it’s more now.
Large Yen amounts can come worryingly close to the limits of precision.

I once watched a colleague trying to persuade a roomful of people to stop
using floats. The demo showed what happens to 0.1 + 0.1 + 0.1. The answer was
not 0.3. I’m not sure they believed him. Some of them are probably still
writing software for your bank!

~~~
mrunkel
For shits and giggles:

    
    
        $ python3
        Python 3.7.4 (default, Sep  7 2019, 18:27:02)
        [Clang 10.0.1 (clang-1001.0.46.4)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> print (0.1 + 0.1 + 0.1)
        0.30000000000000004
    
        $ irb
        2.5.1 :001 > print 0.1 + 0.1 + 0.1
        0.30000000000000004 => nil
    
        Chrome browser console:
        > 0.1 + 0.1 + 0.1
        < 0.30000000000000004
    
        $ php
        <?php
        echo 0.1 + 0.1 + 0.1;
        0.3

~~~
samatman
Is this because PHP is correct, or is it because PHP rounds reprs at a
different level of precision?
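The latter, if I recall correctly: PHP's default `precision` ini setting is 14
significant digits for float-to-string conversion, so the stored double is the
same; only the display differs. A quick illustration in Python:

```python
x = 0.1 + 0.1 + 0.1  # the same double in every IEEE-754 language

# PHP-style display: round to 14 significant digits before printing.
assert f"{x:.14g}" == "0.3"

# Shortest-round-trip display (Python, modern JS, Ruby):
assert repr(x) == "0.30000000000000004"
```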

------
bluedino
I once worked at a place that had a third party vendor create a reporting
front-end to the outdated minicomputer that the business ran on. After I was
there a couple years, the CFO said "It's really weird, the numbers we get out
of the reporting system are close, but never exactly right when we compare
them by hand to the system."

This person was seriously thinking "this is just how computers are", and after
a day of looking into it I was able to determine that all the issues they were
facing were floating point problems in the Windows front-end's calculations.
It was pulling all the data directly from the old system, after all. How could
it be different?

------
gdm85
Aside from rounding, I have 2 reasons not to use them where (financial)
precision is necessary:

\- [https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER) (it's the same in other languages)

\- if results are not exactly reversible then you cannot implement
verification and that hinders defensive programming

------
thiagoharry
Floating point numbers are scientific notation for computers. I don't see
people without computers doing finance calculations in scientific notation.
It's not natural, and errors happen because of this.

Floating point numbers were not created to represent arbitrary non-integer
numbers. They were created to represent numbers in scientific notation: very
useful for physics and scientific calculations, but not in finance domains.

------
gok
> A 64-bit floating-point number can represent 15 decimal digits, which is all
> balances less than 10 trillion (9 999 999 999 999.99), with two digits after
> the decimal place

There are real monetary quantities for which this is insufficient. The GDP of
Indonesia is measured in 10s of quadrillions of Rupiah. In cases of
hyperinflation this kind of overflow can happen even for human-level
accounting.

Use real fixed point decimal math.
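The cutoff behind that 15-digit figure is easy to verify: past 2^53 units, a
double can no longer distinguish adjacent integer amounts:

```python
# 2**53 = 9,007,199,254,740,992: the last point where every integer is exact.
assert float(2**53) == float(2**53 + 1)  # adjacent amounts collapse
assert float(2**53 - 1) != float(2**53)  # still distinct just below it

# Ten quadrillion (1e16) is already past the limit:
assert float(10**16 + 1) == float(10**16)
```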

------
tjalfi
Others have mentioned most of the pitfalls with the suggested approach.

Another potential issue is that the floating point rounding mode can be
modified by shared libraries or plugins.

[0] has examples of application crashes caused by a change in the rounding
mode.

[0]
[http://www.virtualdub.org/blog/pivot/entry.php?id=53](http://www.virtualdub.org/blog/pivot/entry.php?id=53)

------
every
The hidden "gotcha" in most spreadsheets is that formatting cells to currency
or 2 decimals changes only the display, not the number itself. To do that and
eliminate potential floating point errors, wrapping EVERY currency formula in
one of the round functions is standard practice for bookkeeping...
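That habit translates directly outside spreadsheets: as a rough Python sketch (standing in for wrapping each formula in Excel's ROUND, with made-up line items), rounding after every operation keeps the running balance on an exact cent boundary:

```python
items = [0.10, 0.10, 0.10]  # three ten-cent line items

# Naive summation lets representation error leak into the total.
naive = sum(items)

# Rounding to 2 decimals after each step, spreadsheet-style.
rounded = 0.0
for x in items:
    rounded = round(rounded + x, 2)

print(naive)    # 0.30000000000000004
print(rounded)  # 0.3
```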

------
ourmandave
We've already had this argument on stackoverflow _10 years ago._

[https://stackoverflow.com/questions/582797/should-you-
choose...](https://stackoverflow.com/questions/582797/should-you-choose-the-
money-or-decimalx-y-datatypes-in-sql-server)

------
protomyth
No, please don't. Often you have to combine money with quantities that also
need unit of measure conversions, and this is just one step too far. Frankly,
chasing the error goes from a programming problem to a contract problem very
quickly.

Also, you are going to end up storing in decimal fields in the database
anyway.

~~~
dionian
> Also, you are going to end up storing in decimal fields in the database
> anyway.

why not use cents as the unit and store it without decimals?

~~~
protomyth
Because it is a massive pain in the butt for your report writers / business
intelligence people whose tools expect decimal fields. Never mind all the
existing code that expects an actual decimal and not an integer. Plus tax
folks like their mills.

~~~
Supermancho
> Because it is a massive pain in the butt for your report writers / business
> intelligence people whose tools expect decimal fields

That's a matter for the ETL that dumps the data on their screens.

~~~
protomyth
Yeah, no. First, sometimes you actually don't get to use an ETL before doing
the report, and massaging an integer or float to decimal in an ETL is just
going to cause you trouble.

------
hanthar
    $ python3
    Python 3.7.3 (default, Jun 16 2019, 16:10:46) [Clang 8.0.0 (clang-800.0.42.1)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> 2.675 == 2.6749999999999998
    True

------
camgunz
Yeah isn't this just a real bad idea? There's no 0.1 in half/single/double
precision arithmetic for one. Is "speed of financial calculations" really the
bottleneck for these apps? I kind of doubt it.

------
nabla9
Large financial calculations where errors accumulate are more often than not
done in database queries, not in the application software.

What is faster in Postgresql? Double floats and TRUNC after each operation or
using NUMERIC(precision, scale)?

~~~
_wzsf
hey bro, where's your scholarly record of impactful publications?

------
rubicks
I worked for a large financial services institution. They cared about the
vagaries of floating-point money only slightly more than they cared about
numerical stability --- which is to say not at all.

------
eplanit
"if you are doing some financial math that does not need to be accurate to the
penny, just use floating point numbers. If it is good enough for Excel, it
will be good enough for most applications".

Uh, no.

------
WalterBright
I just use longs with the value representing the number of pennies.
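A minimal sketch of that integer-cents approach in Python (arbitrary example values; the formatting step only happens at the edges of the system):

```python
# All arithmetic is exact integer math on pennies.
price_cents = 1999  # $19.99
qty = 3
total_cents = price_cents * qty
assert total_cents == 5997

# Convert to dollars-and-cents text only for display.
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $59.97
```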

------
DonHopkins
You might get unexpectedly negative financial results if you don't use floored
division and modulo properly:

[https://stackoverflow.com/questions/4467539/javascript-
modul...](https://stackoverflow.com/questions/4467539/javascript-modulo-gives-
a-negative-result-for-negative-numbers)

>JavaScript % (modulo) gives a negative result for negative numbers

>Question: According to Google Calculator (-13) % 64 is 51. According to
Javascript (see this JSBin) it is -13. How do I fix this?

>Solution: ((i % n) + n) % n

The Forth-83 standard word /MOD (or FM/MOD) implemented floored division
properly, subtly breaking many old FORTH programs, but it was a step (or
rather a truncation) in the right direction, towards negative infinity instead
of zero.

[https://www.nimblemachines.com/symmetric-division-
considered...](https://www.nimblemachines.com/symmetric-division-considered-
harmful/)

>Symmetric division considered harmful

>Since its 1983 standard (Forth-83), Forth has implemented floored division as
standard. Interestingly, almost all processor architectures natively implement
symmetric division.

>What is the difference between the two types? In floored division, the
quotient is truncated toward minus infinity (remember, this is integer
division we’re talking about). In symmetric division, the quotient is
truncated toward zero, which means that depending on the sign of the dividend,
the quotient can be truncated in different directions. This is the source of
its evil.
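Python happens to be one of the few mainstream languages whose % is floored, so the contrast is easy to demonstrate (math.fmod gives the truncated, CPU-style remainder):

```python
import math

# Floored division: quotient toward minus infinity, remainder takes
# the sign of the divisor -- the Google Calculator answer above.
assert -13 % 64 == 51
assert -13 // 64 == -1

# Symmetric (truncated) division: remainder takes the sign of the
# dividend -- the JavaScript answer above.
assert math.fmod(-13, 64) == -13.0
```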

[https://forth-standard.org/standard/core/FMDivMOD](https://forth-
standard.org/standard/core/FMDivMOD)

>Rationale:

>By introducing the requirement for "floored" division, Forth 83 produced much
controversy and concern on the part of those who preferred the more common
practice followed in other languages of implementing division according to the
behavior of the host CPU, which is most often symmetric (rounded toward zero).
In attempting to find a compromise position, this standard provides primitives
for both common varieties, floored and symmetric (see SM/REM). FM/MOD is the
floored version.

>The committee considered providing two complete sets of explicitly named
division operators, and declined to do so on the grounds that this would
unduly enlarge and complicate the standard. Instead, implementors may define
the normal division words in terms of either FM/MOD or SM/REM providing they
document their choice. People wishing to have explicitly named sets of
operators are encouraged to do so. FM/MOD may be used, for example, to define:

    
    
    : /_MOD ( n1 n2 -- n3 n4) >R S>D R> FM/MOD ;
    : /_ ( n1 n2 -- n3) /_MOD SWAP DROP ;
    : _MOD ( n1 n2 -- n3) /_MOD DROP ;
    : */_MOD ( n1 n2 n3 -- n4 n5) >R M* R> FM/MOD ;
    : */_ ( n1 n2 n3 -- n4 ) */_MOD SWAP DROP ;

