This is false. It's not correct to handle currency with floating point types.

I don't see any problem with it if it's decimal. Here's an accepted answer on Stack Overflow with hundreds of upvotes recommending the use of `decimal` to store currency amounts in C#. That's a decimal floating point type.

https://stackoverflow.com/a/693376/44743
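
For a quick illustration, here is a sketch assuming GCC's _Decimal64 extension on x86-64 Linux (not standard C, and glibc's printf can't format decimal floats, so it only compares):

  #include <stdio.h>

  int main(void) {
    double     b = 0.1 + 0.2;      /* radix-2: none of these values is exact */
    _Decimal64 d = 0.1dd + 0.2dd;  /* radix-10: all three values are exact */

    printf("binary:  0.1 + 0.2 == 0.3? %s\n", b == 0.3   ? "yes" : "no");
    printf("decimal: 0.1 + 0.2 == 0.3? %s\n", d == 0.3dd ? "yes" : "no");
    return 0;
  }

The binary comparison prints "no", the decimal one "yes".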


They said floating point decimal types, which probably means BCD.

There are different implementations, and BCD is only one of them. Another popular one is a mantissa and exponent, but the exponent applies a base-10 shift rather than the base-2 shift of typical floating point.
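
A rough sketch of that idea (the type and field names are illustrative, not from any particular library): the stored value is significand * 10^exponent, so 1.23 becomes (123, -2) and stays exact.

  #include <stdio.h>

  struct dec {
    long long significand;  /* the decimal digits as an integer */
    int exponent;           /* base-10 shift: -2 means "divide by 100" */
  };

  int main(void) {
    struct dec price = { 123, -2 };  /* represents 1.23 exactly */
    printf("%lld * 10^%d\n", price.significand, price.exponent);
    return 0;
  }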


They mean radix-10 floating point, as opposed to the radix-2 floating point you are thinking of. The packing of the decimal fractional digits in the significand of a radix-10 FP number need not be BCD; it can use other encodings (e.g., DPD or something else).

0.3 is exactly representable in radix-10 floating point but not in radix-2 FP (where it gets rounded, with at most 0.5 ulp of error, as seen in the title). Likewise, 1/3 = 0.3333... is exactly representable in radix-3 floating point but in neither radix-2 nor radix-10 FP, etc.
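
You can see the radix-2 rounding directly by printing the doubles at full precision (assuming IEEE-754 doubles):

  #include <stdio.h>

  int main(void) {
    printf("%.17g\n", 0.3);        /* 0.29999999999999999 */
    printf("%.17g\n", 0.1 + 0.2);  /* 0.30000000000000004, the title's number */
    return 0;
  }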


Right, it is not correct, but many programs do it wrong. If you just do a couple of additions the problem will never be noticed. It's easy to write a program that sums up 0.01 until the result is no longer equal to n * 0.01. I'm not at my computer now, so I can't run it again, but I remember n was too big to be relevant for any supermarket cashier. Of course, applications exist where it matters.

But it is correct.

> It's easy to write a program that sums up 0.01 until the result is not equal to n * 0.01.

It's not easy to do that if you use a floating point decimal type, like I recommended. For instance, with C#'s decimal, that will take you somewhere in the neighborhood of 10^26 iterations. With a binary floating point number, it takes less than 10.
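
A quick sketch to check the binary half of that claim (IEEE-754 doubles; the exact count can vary with compiler and platform, but it stays in the single digits on common x86-64 setups):

  #include <stdio.h>

  int main(void) {
    double sum = 0.0;
    int i = 0;
    while (sum == i * 0.01) {  /* running sum vs. n * 0.01 */
      sum += 0.01;
      i++;
    }
    printf("first mismatch at i = %d\n", i);
    return 0;
  }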


Of course with a decimal type there is no rounding issue. That's not what 0.30000000000000004 is about.

Many languages have no decimal support built in, or at least it is not the default type. With a binary type the rounding already becomes visible after 10959 additions of 1 cent.

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  /* Format the exact value (integer cents) and the float sum with two
     decimal places, and report the first point where they disagree. */
  bool compare(int cents, float sum) {
    char buf[20], floatbuf[24];
    int len;
    bool result;

    len = sprintf(buf, "%d", cents / 100);
    sprintf(buf + len, ".%02d", cents % 100);
    sprintf(floatbuf, "%.2f", sum);

    result = !strcmp(buf, floatbuf);
    if (!result)
      printf("Cents: %d, exact: %s, calculated %s\n", cents, buf, floatbuf);
    return result;
  }

  int main() {
    float cent = 0.01f, sum = 0.0f;

    for (int i = 0; compare(i, sum); i++) {
      sum += cent;
    }
    return 0;
  }
Result:

  Cents: 10959, exact: 109.59, calculated 109.60
This is on my 64-bit Intel machine with Linux, gcc, and glibc, but I guess most machines use IEEE floating point these days, so it should not vary much.

That is simply not true. The C# decimal type doesn't accumulate errors when adding, unless you exceed its ~28 digits of precision. E.g. see here: https://rextester.com/RMHNNF58645

> unless you exceed its ~28 digits of precision

Precisely. That's why I specified ~10^26 addition operations.


It's not correct, but it happens anyway, even in large ERP systems that really should know better but somehow don't.

It is correct! Using decimal types is the widely recommended way of solving this problem. That includes fixed and floating point types. The problem is using base-2 floating point types, since those are subject to the kinds of rounding errors in the OP. But decimal floating point types are not subject to these kinds of rounding errors.

But they still can't precisely represent quantities like 1/3 or pi.


> Using decimal types is the widely recommended way of solving this problem.

No, it's not. The widely recommended way of solving this problem is to use fixed-point numbers. Or, if one's language/platform does not support fixed-point numbers, then the widely recommended way of solving this problem is to emulate fixed-point numbers with integers.

There is zero legitimate reason to use floating-point numbers in this context, regardless of whether those numbers are in base-2 or base-10 or base-pi or whatever. The absolute smallest unit of currency any (US) financial institution is ever likely to use is the mill (one tenth of a cent), and you can represent 9,223,372,036,854,775,807 of them in a 64-bit signed integer. That's more than $9 quadrillion, which is 121-ish times the current gross world product; if you're really at the point where you need to represent such massive amounts of money (and/or do arithmetic on them), then you can probably afford to design and fabricate your own 128-bit computer to do those calculations instead of even shoehorning it onto a 64-bit CPU, let alone resorting to floating-point.
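
For illustration, a minimal sketch of that integer approach, with amounts stored as signed 64-bit counts of mills (the money_t type and helper are made up for this example):

  #include <inttypes.h>
  #include <stdio.h>

  typedef int64_t money_t;  /* amount in mills: $1.00 == 1000 */

  /* Illustrative helper; assumes a non-negative amount. */
  static void print_money(money_t m) {
    printf("$%" PRId64 ".%03" PRId64 "\n", m / 1000, m % 1000);
  }

  int main(void) {
    money_t price = 19990;     /* $19.990 */
    money_t tax   = 1599;      /* $1.599  */
    print_money(price + tax);  /* exact: $21.589 */
    return 0;
  }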

Regardless of all that, my actual point (pun intended) is that there are plenty of big ERP systems (e.g. NetSuite) that use binary floating point numbers for monetary values, and that's phenomenally bad.


It's not correct, but in many cases it's plenty accurate.

If you are dealing with other people’s money, the only acceptable accuracy is exact. "Close enough" should not be in any financial engineer’s mindset, imho.

In this case, it's both. Decimal floating point types do not lose precision with base-10 numbers, unless using trig, square roots, arbitrary division and the like.

> arbitrary division

Like commonly happens in financial calculations, especially interest calculations.
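
Division is exactly where every representation needs an explicit rounding policy. A common approach, sketched here with integer cents, is to distribute the remainder so the shares still sum to the original amount:

  #include <stdio.h>

  int main(void) {
    long cents = 10000;          /* $100.00, which 3 does not divide evenly */
    int parts = 3;
    long share = cents / parts;  /* 3333 cents per share */
    long left  = cents % parts;  /* 1 cent left over */

    /* Hand out the leftover cents one per share to preserve the total. */
    for (int i = 0; i < parts; i++) {
      long s = share + (i < left ? 1 : 0);
      printf("share %d: %ld.%02ld\n", i + 1, s / 100, s % 100);
    }
    return 0;
  }

This prints 33.34, 33.33, 33.33, which sums back to 100.00.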
