
I've often used floating point for currency. What can go wrong? Actually, a number of things can go wrong when using floating point, as the article pointed out. The main concern nowadays is that floating point doesn't represent values with infinite precision and consequently cannot exactly represent every possible value.
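
For illustration, a quick Python sketch (just to show the point, not something from the article): 0.1 and 0.2 are each stored as the nearest binary fraction, so their sum is not exactly 0.3, although rounding to cents recovers the expected value.

    # binary floating point stores 0.1 and 0.2 inexactly
    print(0.1 + 0.2)            # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)     # False
    print(round(0.1 + 0.2, 2))  # 0.3 -- rounding to cents hides the error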

In the past, number formats were often a concern. The first assembly language program I ever wrote was for an IBM mainframe and required converting numbers from zoned decimal format to packed decimal format. There were actually native assembly instructions for doing calculations in these different decimal formats. The IEEE 754 standard for floating point that we use today came about in 1985. Before that, floating point might involve 60-bit values (as on CDC mainframes) or hexadecimal-based floating point (as on the IBM 360/370 mainframes). Double precision was not widely used because of memory limitations. Floating point was much slower than integer arithmetic. Programming languages didn't provide good or consistent facilities for non-integer values (it was virtually impossible to predict which implicit conversions PL/1 was doing in mixed calculations, and COBOL and FORTRAN handled calculations wildly differently). I believe that some of the current general advice about handling financial calculations stems from considerations that made sense during these Middle Ages of programming.

Now, with double precision available everywhere, standardized rounding modes, and the other benefits of IEEE 754, I think it's safe to consider using floating point for currency calculations. The most widely used software for financial calculations, Microsoft Excel, uses floating point.

If Excel uses floating point, why is there a widely promulgated admonition to avoid it for currency? I believe that it made sense in the Middle Ages of computing, but it is not as relevant now.

While it is true that some quantities, for example 0.01, cannot be represented exactly in binary floating point, the same is true of decimal fixed point, where 1/3 cannot be represented exactly. Common financial calculations can fail to be exact no matter the number format.
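
To make that concrete, a small sketch using Python's standard decimal module: converting the float 0.01 to a Decimal exposes the binary approximation that is actually stored, and dividing 1 by 3 in decimal arithmetic gets cut off at the context's precision.

    from decimal import Decimal

    # the binary double closest to 0.01 is not exactly 0.01
    print(Decimal(0.01))            # 0.010000000000000000208166817117...
    # decimal arithmetic cannot represent 1/3 exactly either
    print(Decimal(1) / Decimal(3))  # 0.3333333333333333333333333333 (28 digits by default)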

Consider calculating the payments for a fixed-rate 30-year mortgage. At a 5% annual interest rate, a $200,000 loan will have a monthly interest rate of 0.05/12 and 12 * 30 payments, and the amount of each of those payments will be:

    200000 * (0.05/12) / (1 - (1 + (0.05/12))^(-(12*30)))
This formula cannot be calculated exactly in decimal floating point for two reasons. First, the fraction 0.05/12 is not exact in decimal, and second, there is unlikely to be a direct way to do exponentiation of decimal values.
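
Here is a rough Python sketch of that calculation in binary doubles (just an illustration): the intermediate result carries a long fractional tail, and only the final rounding to cents yields a payment you could actually bill, roughly $1073.64 for this loan.

    rate = 0.05 / 12   # monthly interest rate, not exactly representable
    n = 12 * 30        # number of monthly payments
    payment = 200000 * rate / (1 - (1 + rate) ** -n)
    print(payment)            # unrounded double with a long fractional tail
    print(round(payment, 2))  # 1073.64 -- the amount that actually gets billed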

Some languages (like Common Lisp) support exact rational numbers. This allows exact calculations with any rational fraction, but it still doesn't allow calculations involving irrational numbers, like sqrt(2), to be done exactly. Consider calculating the cost of a circular plate priced in dollars per gram: that involves using pi.
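
A sketch of the same idea in Python, whose fractions module stands in for Common Lisp rationals here: exact rational arithmetic handles 1/3 perfectly, but pi and sqrt(2) can only enter the computation as rounded approximations.

    from fractions import Fraction
    import math

    # rational arithmetic is exact for any ratio of integers
    print(Fraction(1, 3) * 3 == 1)            # True
    print(Fraction(1, 10) + Fraction(2, 10))  # 3/10, exactly

    # irrational values must be approximated before they enter the calculation
    radius_cm = Fraction(10)
    area = Fraction(math.pi) * radius_cm ** 2  # uses a rational approximation of pi
    print(float(area))                         # close to 100*pi, but not exact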

Care must always be exercised when doing financial calculations if they need to match how financial institutions do their calculations. Reports must round fractional values using the same rounding methods (is 1.005 rounded to 1.00 or 1.01? i.e. round-half-up vs. round-half-even). Values should usually be stored after rounding to the same units used in the currency. These problems are not caused by the 17th digit of a double-precision floating point value being off by one.
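
In Python's decimal module those two conventions are ROUND_HALF_UP and ROUND_HALF_EVEN (again just a sketch); note that calling round() on the binary float 1.005 gives 1.00 not because of the rounding mode but because the stored double is already slightly below 1.005.

    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

    price = Decimal("1.005")
    print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))    # 1.01
    print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # 1.00

    # with binary floats the tie never really exists:
    # the double nearest to 1.005 is slightly below it, so it rounds down
    print(round(1.005, 2))  # 1.0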

For further information on the kinds of considerations that need to be made, take a look at the design discussions that have been documented for the beancount project [1].

[1] https://beancount.github.io/docs/31_rounding_precision_in_be...

> I've often used floating point for currency

The question is WHY? Because decimal types are barely second-class citizens across the stack. It's similar with dates.

Bad defaults are bad, and require stupid amounts of workarounds.

And just because Excel does it?

https://en.wikipedia.org/wiki/Numeric_precision_in_Microsoft...

Excel is an amalgamation of surprises that are not fixed. And when you produce the CORRECT results, the users demand that we give the same wrong results as Excel!



