When you're doing floating point arithmetic on a computer, it will approximate and round certain values in ways that don't match how humans do it when they're doing, e.g., accounting.
So you need to run a massive physics simulation really fast? Yes, floats are great.
You need to calculate taxes on a massive corporation's fiscal year? Bad idea.
Some libraries advertise "arbitrary precision", many systems have a "decimal" type intended for currency, and so on; those won't make all the same mistakes, but as the OP said, you still need to control the rounding rules and make sure they match the law.
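For instance, here's a minimal sketch using Python's decimal module, with a made-up 8.25% rate and half-up rounding (whether half-up is actually the legally correct rule depends on your jurisdiction):

    from decimal import Decimal, ROUND_HALF_UP

    RATE = Decimal("0.0825")  # hypothetical tax rate

    def tax(amount: Decimal) -> Decimal:
        # quantize() makes the rounding rule explicit and auditable
        return (amount * RATE).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

    print(tax(Decimal("19.99")))  # 1.65 (exactly 1.649175, rounded half-up)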
> You need to calculate taxes on a massive corporation's fiscal year? Bad idea.
That depends on whether the hundred-billion-dollar corporation cares about being off by a dollar.
And by "off" I mean "different from how humans round", not necessarily further away from an infinite-precision calculation. In fact at "massive corporation" level I would guess that binary floating point is more accurate than a typical fractional penny system.
It's not so much how far off it is, but that it's off at all. If the numbers don't add up, then they don't add up. Any difference has to be found and accounted for, and it becomes a needle-in-a-haystack search. Think about trying to find $0.05 spread across hundreds of thousands of transactions due to rounding issues.
Every single publicly listed company, every single one of them, is off by way more than just a dollar when it comes to calculating their taxes. And I don't mean clever accounting tricks or tax avoidance schemes, I just mean in terms of actual mistakes being made.
If they could just pay the dollar and never have to worry about it again, sure. But the point is for them to have confidence that the math is unimpeachable and identical to whatever auditor or tax official would compute at every step of the way so you don't just have to guess at correctness with some waving of hands.
Surprisingly common values like 0.1 don't have a precise representation in binary for most formats, including standard floating point number formats. See https://0.30000000000000004.com/ for more detail than you can shake a stick at.
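The classic demonstration, in a Python session:

    >>> 0.1 + 0.2
    0.30000000000000004
    >>> 0.1 + 0.2 == 0.3
    False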
Also, if the local tax code mandates using 5 decimal places for intermediate values, then you will introduce "errors" using formats that give greater precision as well as those that give less. Having worked on mortgage and pension calculations, I can state that the (very) small errors seen at individual steps because of this can balloon significantly through repeated calculations.
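As a sketch of what that looks like in practice (the 5-decimal rule here is hypothetical), every intermediate result gets forced to the mandated precision:

    from decimal import Decimal, ROUND_HALF_UP

    FIVE_DP = Decimal("0.00001")

    def intermediate(x: Decimal) -> Decimal:
        # round every intermediate value to exactly 5 decimal places,
        # as the (hypothetical) local rules require
        return x.quantize(FIVE_DP, rounding=ROUND_HALF_UP)

    monthly_rate = intermediate(Decimal("0.05") / 12)
    print(monthly_rate)  # 0.00417, not 0.0041666666... -- by design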
Furthermore, the name "floating point" gives away the other issue: floating point numbers are accurate to a given number of significant figures, not decimal places. For large numbers, any decimal places in the result are at best an estimate, and as above, rounding errors at each stage can compound into a much larger error by the end of a calculation.
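You can see the significant-figures problem directly in a Python session with 64-bit doubles:

    >>> big = 1e16            # past 2**53, the gap between doubles exceeds 1
    >>> big + 0.01 == big     # adding a cent changes nothing
    True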
IEEE standard floating point uses a binary mantissa.
And binary has trouble representing fractions that are common in prices:
    $ bc
    obase=2
    scale=20
    1/5
    .00110011001100110011...
1/5 in binary is a repeating binary fraction: 0.0011001100110011...
Just as you can't express 1/3 or 1/7 precisely as a non-repeating decimal fraction, you can't express 1/5 and 1/10 as a non-repeating binary fraction. As a result, most prices involving cents in currency cannot be expressed precisely as binary floating point numbers.
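You can inspect the exact value a double actually stores by converting it losslessly with Python's Decimal:

    >>> from decimal import Decimal
    >>> Decimal(0.1)  # the nearest double to 1/10
    Decimal('0.1000000000000000055511151231257827021181583404541015625')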
The biggest issue is you now need programmers who know about epsilon computation and error propagation when working with incorrect numbers. Then you need to know when to fudge the visual representation of your incorrect number (and you probably also need to understand when your programming language / libraries do fudge the output for you).
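A small Python illustration of both points -- tolerance-based comparison, and the gap between a float's true value and its display:

    import math

    a = 0.1 + 0.2
    print(a == 0.3)                             # False: naive equality fails
    print(math.isclose(a, 0.3, rel_tol=1e-9))   # True: epsilon-style comparison
    print(repr(a))                              # 0.30000000000000004
    print(f"{a:.2f}")                           # 0.30 -- the "fudged" display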
FP numbers have their uses, but they're better reserved for scientists doing actual scientific work, not for representing what are actually tiny numbers (in the grand scheme of things) that can be represented perfectly by other means.
If there's an applicable law or regulation that says "you must do x", and you do y (and that yields different results), you'll get into trouble, even if your way yields "better" or "more accurate" results.
This is not to say that using floats and rounding correctly necessarily does yield different results, by the way (although most likely it will) – but if they do differ, you're going to have a bad time using floats.
Floating point calculations without some final rounding step before presentation/export/storage are almost always wrong, since you're implying much more precision than is justified by your source data.
You can represent 0.3 as 0.300000…0004, which rounds to 0.3 again in the end.
But you need to reason about the number and nature of intermediate operations, which is tricky, since errors usually accumulate and don’t always cancel out.
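Both halves of that are easy to demonstrate in Python:

    x = 0.1 + 0.2
    print(round(x, 2))   # 0.3 -- a final rounding step recovers the answer

    total = sum(0.1 for _ in range(1_000_000))
    print(total)         # close to, but not exactly, 100000.0: errors accumulated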
> since errors usually accumulate and don’t always cancel out.
The problem is that from the system's perspective, these aren't "errors". 0.3000000....4 is a perfectly valid value. It's just not the value that you want. But the computer doesn't know what you want.
> The problem is that from the system's perspective, these aren't "errors".
When I say "error" here I mean the mathematical term, i.e. numerical error, from error analysis, not "error" as in "an erroneous result".
There is a formalism for measuring this type of error and making sure it does not exceed your desired precision.
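The basic unit of that formalism is the ulp ("unit in the last place"), the spacing between adjacent doubles at a given magnitude; Python exposes it directly:

    import math

    print(math.ulp(1.0))    # 2.220446049250313e-16 (machine epsilon for doubles)
    print(math.ulp(1e16))   # 2.0 -- at this magnitude a cent is far below one ulp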
> It's just not the value that you want.
My point is exactly that if you're looking at 0.300000...4, you aren't done with your calculation yet. If you stop there and show that value to a user somewhere (or are blindly casting it to a decimal or arbitrary precision type), you are using IEEE 754 wrong.
You know that your input values have a precision of only one or two digits after the decimal point, in this example, so considering more than ten digits of precision in your output is wrong. You have to round!
It's the same type of error that newspapers sometimes make when they say "the damage is estimated to be on the order of $100 million (€93.819 million)".
Yes, this is often more complicated and error-prone (the human kind this time) than just using decimals or integers, and sometimes it will outright not work (since it's not precise enough – which your error analysis should tell you!). But that doesn't mean that IEEE 754 is somehow inherently not suitable for this type of task.
As a practical example, Bitcoin was (according to at least one source) designed with floating point precision and error analysis in mind, i.e. by limiting the domain of possible values so that it fits into double-length IEEE 754 floating point values losslessly – not because it's necessarily a good idea to do Bitcoin arithmetic using floating point numbers, but to put bounds on the resulting errors if somebody does it anyway. That's applied error analysis :)
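A quick sanity check of that claim: every integer up to 2**53 is exactly representable as a double, and the total satoshi supply stays below that.

    MAX_BTC = 21_000_000            # total supply cap
    SATOSHIS_PER_BTC = 10 ** 8

    max_satoshis = MAX_BTC * SATOSHIS_PER_BTC  # 2.1e15
    print(max_satoshis < 2 ** 53)              # True: 2.1e15 < ~9.0e15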
If you just add up the errors, sure. The bigger risk is tipping a value the wrong direction right before a rounding step, or ending up with an error right before multiplying a now-wrong per-unit value by some large-ish factor.
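Python's round() shows the tipping effect directly: 2.675 is stored as a value slightly below the true half-way point, so it rounds down.

    from decimal import Decimal

    print(round(2.675, 2))  # 2.67, not the 2.68 a human would expect
    print(Decimal(2.675))   # 2.67499999999999982236431605997495353221893310546875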
Often these things are not a big problem on their own, but they later get compounded because someone does something stupid like passing these imprecise values around to be distorted further all over the place.
And sometimes the reason it doesn't become a legal problem turns out to be that your finance department quietly works around it by expending expensive manpower accounting for discrepancies that shouldn't be there in the first place, increasing the cost to the business by orders of magnitude over what the developers might have assumed to be the worst case (if they're aware of the discrepancy at all).
This is one of those things you can get away with many times, many places, with no ill effects. But when it finally bites you it can get expensive and/or really bad to deal with, and it's fixed by simply never doing money calculations on datatypes with imprecise arithmetic, and having a five minute conversation with your finance team about what your local rules for rounding tax amounts are.
In accounting proper, no. But while preparing input to the accounting, in the form of generating invoices, I've lost count (sorry) of the number of times I've seen people doing tax calculations etc. on unit prices and then multiplying by the number of units ordered, and then further compounding potential issues by adding up these numbers from multiple invoice lines. None of which is usually the right thing to do, all of which you often "get away with" without causing noticeable discrepancies, and so which people often fail to catch in testing. Until you suddenly don't.
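A sketch of that failure mode (the 8.25% rate is made up, and which rounding order is legally correct varies by jurisdiction):

    from decimal import Decimal, ROUND_HALF_UP

    CENT = Decimal("0.01")
    RATE = Decimal("0.0825")          # hypothetical tax rate
    unit_price = Decimal("0.07")
    qty = 1000

    # rounding tax per unit, then multiplying, inflates the error 1000x
    per_unit = (unit_price * RATE).quantize(CENT, ROUND_HALF_UP)  # 0.01
    print(per_unit * qty)                                         # 10.00

    # taxing the line total and rounding once gives a very different answer
    print((unit_price * qty * RATE).quantize(CENT, ROUND_HALF_UP))  # 5.78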
I think the historical interpretation is also relevant. The systems that did accounting before digital computers used base 10, so the first computerized accounting systems used base 10 as well. This legacy extends to the point that mainframes often had (and I believe still have) special decimal floating point math instructions. There have been several ways to accomplish this, such as BCD (binary-coded decimal), where numbers are stored in base 10 using a 4-bit encoding per digit. I believe this can be arbitrary precision, but I don't have any experience myself. Some hardware also has decimal32 and decimal64 floating point, which are part of recent versions of the IEEE 754 spec [1]. Databases also often have a DECIMAL type for doing calculations on money values [2]. So I think it's not just that laws say it should be a certain way, but also that it's important to maintain consistency between systems over time.
Floats lose precision unexpectedly with certain fractions that are perfectly representable in decimal, and also with certain integers once you get high enough.
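The integer cliff for doubles is at 2**53:

    >>> 2.0 ** 53
    9007199254740992.0
    >>> 2.0 ** 53 + 1.0   # the next integer up is not representable
    9007199254740992.0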
The standard in ad-tech (not sure about banking) is to use int64s representing either microdollars or microcents, for a max capacity of roughly 9.2*10^12 or 9.2*10^10 dollars, respectively.
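A sketch of the micros convention (Python ints aren't int64, but the arithmetic is the same):

    MICROS_PER_DOLLAR = 1_000_000
    INT64_MAX = 2 ** 63 - 1

    price = 4_990_000               # $4.99 in microdollars, stored exactly
    total = 3 * price               # pure integer math: no drift
    print(f"${total / MICROS_PER_DOLLAR:.2f}")  # $14.97 (division only for display)

    print(INT64_MAX // MICROS_PER_DOLLAR)       # ~9.2e12 dollars of headroom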
Floats are an imperfect representation of real numbers, and as such there are infinitely many real numbers that cannot be accurately represented with floats (or doubles).
It gets even worse when you start doing calculations on floats/doubles.
These inaccuracies are OK for a lot of things; graphics often uses floats, and the errors are small enough that they don't matter.
But currency absolutely needs to be accurate, and for that reason floats/doubles are inappropriate.
That would have been my default assumption