Currency handling is almost never done with rationals (numerator and denominator) and is frequently (and correctly so!) done with fixed or floating point decimal types.

 I develop accounting software for banks, brokerage houses, and the like. Currency, taxes, rebates, etc. are NEVER handled with floating point. Whatever you do with money, you need predictable, reproducible results. It is the norm that calculations are checked by software at two companies, on both sides of a transaction. Any discrepancies mean alarms, bug reports, unhappy customers. Every significant operation is exactly specified, with rounding rules, etc. For card payments, and especially on terminals, BCD is usually used. For everything else, usually some kind of arbitrary-precision decimal library (BigInteger, BigDecimal).
 > Currency, taxes, rebates, etc. handling is NEVER done with floating point.

 Nonsense. I’ve seen real banking code at reputable banks that uses floats.

 > Whatever you do with money you need predictable, reproducible results.

 Floats aren’t random. They’re perfectly deterministic, predictable and reproducible. If you do the same operation in two places you get the same result.
 I write real banking code. There is definitely banking code that uses floats, e.g. valuation of financial instruments. The parent comment talks about software that does transactions and “simpler” calculations, like taxes and fees. When people talk about non-determinism of floating point, what they usually mean is non-associativity, that is, (x+y)+z may not be exactly equal to x+(y+z).
 > When people talk about non-determinism of floating point, what they usually mean is non-associativity, that is (x+y)+z may not be exactly equal to x+(y+z).

 Good example of this, in Python 3:

```
>>> (0.1 + 0.2) + 0.3
0.6000000000000001
>>> 0.1 + (0.2 + 0.3)
0.6
```
 Every single time you run those two statements, you’ll get the same result. Yes, they’re non-associative. But that’s specified and documented. That’s not the same thing as non-deterministic in any way.
 Yet, in accounting, you are expected to be able to sum a set of numbers in different ways and still get the same result.
 Yes, sorry, I was just intending to highlight non-associativity :) I agree it's not "non-deterministic".
 The same code might be optimised in different ways by different compilers, though (or the same compiler with different flags). This might lead to different results for the same code. In that sense, it's non-deterministic.
 > The same code might be optimised in different ways by different compilers, though

 It's not an optimisation if it changes the result! And if you use non-standard flags, that's your problem.
 What is and what is not an optimization, and what changes are allowed or not, depends on the application. MP3 is an optimization of WAV, yet it changes the result. Some applications are OK with reducing the precision of calculations because they are not sensitive enough to small inaccuracies, or because they take effort to control the inaccuracies. For example, graphics applications are typically heavy in FP calculations and yet tend to care much less about precision than about performance. For those applications, trading a little accuracy for a slight performance increase is likely a win.
 > Floats aren’t random. They’re perfectly deterministic, predictable and reproducible. If you do the same operation in two places you get the same result.

 That's not exactly true on real hardware, or at least it wasn't until ~10 years ago. With the x87 FPU, internal precision was 80 bits, while the x86 registers were at most 64 bits. So, depending on the way the program transferred data between the CPU and FPU, you could get different results. It is very likely that different compilers and different optimization decisions could change the way these operations were implemented, so you would get slight differences between different versions of the software. There are/were also several global FP flags that could get changed by other programs running on the same CPU/FPU and that could impact the result of calculations. So, if you want 100% reproducible FP, you would have to either audit all software running on the same machine to ensure it doesn't touch those flags, or set the flags yourself for every FP calculation in your program.
 In a language like Java, all these factors are specified and fully deterministic.
 False. Floating-point arithmetic in Java is generally nondeterministic. You will notice that the strictfp keyword exists and is off by default.
 It's not false - strictfp mandates deterministic FP. If you use it, your program will always run all floating point calculations in exactly the same way, full stop. Secondly, on mainstream implementations, the default behavior is already documented to be the same as strictfp! They're planning to remove the keyword anyway, as it's a no-op in almost all cases. See JEP 306.
 > It's not false - strictfp mandates deterministic FP. If you use that your program will always run all floating point calculations in exactly the same way, full stop.

 If you use it. Which is not the default. Your original claim remains false.
 It does not matter. When you are doing accounting, you are supposed to be able to sum large collections of numbers and get the same result regardless of the order. That's something FP does not provide, which makes it completely unusable for accounting.
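The order-dependence is easy to demonstrate in Python (a minimal sketch with made-up amounts, not from any system discussed in this thread):

```python
# One large balance plus a small fee, summed in two different orders.
# 1e16 is exactly representable as a double; 1e16 + 1.0 is not, and
# rounds back down, so the grouping changes the answer.
big = 1e16
fee = 1.0

a = (big + fee) - big  # the fee is absorbed by rounding
b = (big - big) + fee  # the fee survives

print(a)  # 0.0
print(b)  # 1.0

# With integer cents, grouping and order never matter.
big_c, fee_c = 10 ** 18, 100
assert (big_c + fee_c) - big_c == (big_c - big_c) + fee_c == fee_c
```

Both float results are deterministic, as pointed out upthread; the problem is that the two orderings deterministically disagree.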
 > regardless of the order.

 That seems like a completely arbitrary requirement. Do accounting laws prohibit sort? Does 1 + 1 have to equal green on Tuesdays?
 It seems you have no idea what double-entry accounting is. Each operation is recorded on two opposite sides of various accounts, in a way that always keeps the sides balanced (i.e. they must sum up to the same value). When you look at your bank account, for example, you have various sums on both sides of the account. Yet when you sum them up they MUST agree, or you will be crying blood and suing your bank.
 True, that's a good point. I was thinking of C & C++, but you're right, newer languages do a much better job of specifying and controlling this behavior.Wonder if JS does something similar or not.
 All major C/C++ compilers implement IEEE754. If you are telling the compiler to disregard it, that is on you.
 It's not about IEEE754, it's about the precision that the FP co-processor offers. The results you get are correct per IEEE754, it's just that they may have even less error than required by IEEE754 in some cases. But, this is enough to make the results non-deterministic between different compilation options.Also, changes applied to the FP co-processor by other processes on the machine could impact your process, regardless of your own compilation settings.
 Are you talking about x87? That's ancient history. Compilers don't use that instruction set any more in normal operation. GCC, Java, LLVM, etc. will normally emit SSE2 in order to be standards compliant. They will only relax this if you tell them to, and then it's your problem.
 Yes, I was explicitly talking about the x87, and did mention that it has stopped being relevant for at least 10 years.I believe there is still quite a bit of cautionary discussion of floating point numbers that was written in the age of the x87, so it's important to understand that people were not just misunderstanding IEEE754, even though their concerns are no longer applicable to modern hardware.
 I did not say floats are random. But when you do accounting, you need to be able to sum large sets of numbers, compare the result with a sum of a different set of numbers, and have the sums match. This just does not work with FP. Poor souls who use FP for accounting are the scourge of the industry and the source of jokes.
 That's what I used to think, then I met these banking types, and they told me 'no we understand their semantics and we use them correctly and we know it is safe for our programs.' These teams have compiler experts on them - they aren't ignorant.
 I started working on accounting software in 2002 and right now work for Citi. Compiler experts in accounting? If you are doing HFT, you are not doing accounting. Accounting is what happens later, when all those transactions need to actually be accounted for and balances calculated.
 If you rely on compiler implementations for accounting, you're already lost. For anything imprecise and scientific, doubles will normally work well. But accounting rules specify truncation and rounding exactly, which most people leave unaccounted for until they meet such stringent requirements.
 You're confusing foreign exchange conversion with accounting arithmetic. Two different things.
 This is false. It's not correct to handle currency with floating point types.
 I don't see any problem with it if it's decimal. Here's an accepted answer on stack overflow with hundreds of upvotes recommending the use of `decimal` to store currency amounts in C#. That's a decimal floating point type.
 They said floating point decimal types which probably means BCD.
 There are different implementations, and BCD is only one of them. Another popular one is a mantissa and exponent, but the exponent is for a 10-based shift rather than the typical floating point.
 IEEE 754 defines decimal floating point: https://en.wikipedia.org/wiki/Decimal_floating_point#IEEE_75...
 They mean radix-10 floating point, as compared to the radix-2 floating point you are thinking of. The packing of the decimal fractional digits in the significand of a radix-10 FP number need not be BCD; it can use other encodings (e.g., DPD or something else). 0.3 is exactly representable in radix-10 floating point but not in radix-2 FP (where it would be rounded, with a maximum of 0.5 ulp error, as seen in the title), for instance, just as 1/3 = 0.3333... is exactly representable in radix-3 floating point but in neither radix-2 nor radix-10 FP, etc.
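The radix distinction can be checked from Python, whose `decimal` module implements the General Decimal Arithmetic specification (closely related to IEEE 754 decimal). A quick sketch:

```python
from decimal import Decimal
from fractions import Fraction

# 0.3 is exact in radix-10 floating point...
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")

# ...but the radix-2 double nearest to 0.3 is not exactly 3/10:
print(Fraction(0.3) == Fraction(3, 10))  # False

# 1/3 has no finite representation in either radix; decimal rounds it
# to the context precision (28 significant digits by default).
print(Decimal(1) / Decimal(3))
```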
 Right, it is not correct. But many programs do it wrong. If you just do a couple of additions, the problem will never be noticed. It's easy to write a program that sums up 0.01 until the result is not equal to n * 0.01. I'm not at my computer now, so I can't do it again. I remember n was too big to be relevant for any supermarket cashier. But of course applications exist where it matters.
 But it is correct.

 > It's easy to write a program that sums up 0.01 until the result is not equal to n * 0.01.

 It's not easy to do that if you use a floating point decimal type, like I recommended. For instance, using C#'s decimal, that will take you somewhere in the neighborhood of 10^26 iterations. With a binary floating point number, it takes fewer than 10.
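The same experiment is easy to replicate with Python's `decimal` module, which behaves analogously to C#'s decimal here (a sketch, not the C# code itself):

```python
from decimal import Decimal

cent = Decimal("0.01")
total = Decimal("0")
for i in range(1, 10001):
    total += cent
    # Exact at every step: i additions of one cent is exactly i/100.
    assert total == Decimal(i) / Decimal(100)

print(total)  # 100.00
```

Errors only appear once you exhaust the type's precision (28 significant digits by default), which no cash register will ever do.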
 Of course with a decimal type there is no rounding issue. That's not what 0.30000000000000004 is about. Many languages have no decimal support built in, or at least it is not the default type. With a binary type, the rounding already becomes visible after 10959 additions of 1 cent.

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Compare the exact cent count against the float accumulator,
   both formatted with two decimal places. */
bool compare(int cents, float sum) {
    char buf[20], floatbuf[24];
    int len;
    bool result;

    len = sprintf(buf, "%d", cents / 100);
    sprintf(buf + len, ".%02d", cents % 100);
    sprintf(floatbuf, "%0.2f", sum);
    result = !strcmp(buf, floatbuf);
    if (!result)
        printf("Cents: %d, exact: %s, calculated %s\n", cents, buf, floatbuf);
    return result;
}

int main() {
    float cent = 0.01f, sum = 0.0f;
    for (int i = 0; compare(i, sum); i++) {
        sum += cent;
    }
    return 0;
}
```

Result:

```
Cents: 10959, exact: 109.59, calculated 109.60
```

This is on my 64 bit Intel, Linux, gcc, glibc. But I guess most machines use IEEE floating point these days, so it should not vary a lot.
 That is simply not true. The C# decimal type doesn't accumulate errors when adding, unless you exceed its ~28 digits of precision. E.g. see here: https://rextester.com/RMHNNF58645
 > unless you exceed its ~28 digits of precisionPrecisely. That's why I specified ~ 10^26 addition operations.
 It's not correct, but it happens anyway, even in large ERP systems that really should know better but somehow don't.
 It is correct! Using decimal types is the widely recommended way of solving this problem. That includes fixed and floating point decimal types. The problem is using base-2 floating point types, since those are subject to the kinds of rounding errors in the OP. But decimal floating point types are not subject to these kinds of rounding errors. They still can't precisely represent quantities like 1/3 or pi, though.
 > Using decimal types is the widely recommended way of solving this problem.

 No, it's not. The widely recommended way of solving this problem is to use fixed-point numbers. Or, if one's language/platform does not support fixed-point numbers, then the widely recommended way is to emulate fixed-point numbers with integers.

 There is zero legitimate reason to use floating-point numbers in this context, regardless of whether those numbers are in base-2 or base-10 or base-pi or whatever. The absolute smallest unit of currency any (US) financial institution is ever likely to use is the mill (one tenth of a cent), and you can represent 9,223,372,036,854,775,807 of them in a 64-bit signed integer. That's more than $9 quadrillion, which is 121-ish times the current gross world product; if you're really at the point where you need to represent such massive amounts of money (and/or do arithmetic on them), then you can probably afford to design and fabricate your own 128-bit computer to do those calculations instead of shoehorning them onto a 64-bit CPU, let alone resorting to floating-point.

 Regardless of all that, my actual point (pun intended) is that there are plenty of big ERP systems (e.g. NetSuite) that use binary floating point numbers for monetary values, and that's phenomenally bad.
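For illustration, a minimal Python sketch of the integers-as-cents approach (the helper name and the tax rate are made up), with the rounding rule made explicit:

```python
from decimal import Decimal, ROUND_HALF_UP

def dollars_to_cents(s: str) -> int:
    """Parse a decimal string into integer cents (hypothetical helper)."""
    return int(Decimal(s).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) * 100)

price = dollars_to_cents("19.99")        # 1999 cents
subtotal = 3 * price                     # 5997 cents, exact integer math
# 8.25% tax in pure integer arithmetic, rounding half up at the last step:
tax = (subtotal * 825 + 5000) // 10000   # 495 cents
total = subtotal + tax

print(total)  # 6492  ($64.92)
```

Every intermediate value is an exact integer, so the only rounding is the one the tax rule explicitly prescribes.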
 It's not correct, but in many cases it's plenty accurate
 If you are dealing with other people’s money, the only acceptable accuracy is exact. “Close enough” should not be in any financial engineer’s mindset, imho.
 In this case, it's both. Decimal floating point types do not lose precision with base-10 numbers, unless using trig, square roots, arbitrary division and the like.
 > arbitrary divisionLike commonly happens doing financial calculations, especially doing interest calculations.
