
DEC64: Decimal Floating Point - dmmalam
http://dec64.com/
======
MichaelGG
Why would you ever use floating point for money? You're just asking for
rounding errors if you actually use the range, aren't you?

~~~
kazinator
But the error is in the 17th place: insignificant. Before such an error
represents single digit pennies, you're looking at astronomical money.

The main problem with ordinary, IEEE 754 binary floating point for money is
that even run-of-the-mill numbers like $123.45 (nowhere near astronomical)
exhibit serious problems when added together, due to 0.45 not being
representable exactly in binary, and causing cumulative rounding errors.

This is cured by decimal floating point, which represents $123.45 exactly.

If you deal with quantities that stay within the 16 digits of precision, you
will hardly ever have a problem.
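
A quick illustration of the representation gap, sketched in Python (whose
`float` is an IEEE 754 binary double, and whose `decimal.Decimal` is a decimal
type):

```python
from decimal import Decimal

# Decimal(float) converts the stored binary value exactly, exposing
# that the closest double to 123.45 is not 123.45 itself.
stored = Decimal(123.45)
assert stored != Decimal("123.45")

# A decimal type represents 123.45 exactly, so summing money amounts
# accumulates no representation error.
total = sum(Decimal("123.45") for _ in range(1000))
assert total == Decimal("123450.00")
```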

~~~
wiz21
I used to think like that, but then there's practice versus theory:

1/ When a coder thinks about writing "if a <= b", then he writes "if a <= b";
he easily forgets that he should write "if a <= b + epsilon()".

2/ When people are faced with "I have to have this number with 2 decimals",
they sometimes write a = round(a * 100)/100, forgetting that 'a' is a float
(or double), and then they completely forget that the new 'a' is still not
exact.

3/ Issues with floats arise on simple operations such as addition. And people
tend to think that simple operations have no rounding issues.

It'd be perfectly valid to work with floating point for money; but then you
have to take care of the rounding issues much more often than with fixed point
(or BigDecimal if you're in the java world). Once you use those BigDecimal,
addition becomes safe, and "manual" rounding works as expected (when you round
to 2 decimals, you get exactly 2, not an approximation).

So, after project managing several teams of developers, I can safely say that
although F.P. can be fine, most people can't handle the mental load of
thinking about rounding issues on each math operation they do (and "most
people" includes myself). That's a human risk management issue, not a
theoretical one.

~~~
Asbostos
Epsilons are almost worse than the floating point errors they try to solve!
Now you have to work out a safe epsilon that'll always work and you create
even more edge cases. A lot of the programming I do involves numbers that the
user inputs, so I can't hard code an epsilon because they might enter 1e-10
and break it. Luckily though, a lot of algorithms and formulas can be
reformulated so they either don't do = comparisons or if they do, it doesn't
matter which case comes out when the two operands are very similar.
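
To make that edge case concrete, here is a sketch in Python: a hard-coded
absolute epsilon that works at everyday magnitudes collapses on small user
inputs, while a relative tolerance (math.isclose) scales with the operands:

```python
import math

EPS = 1e-9  # a hard-coded "safe" epsilon

# Fine at everyday magnitudes:
assert abs(0.1 + 0.2 - 0.3) <= EPS

# But a user entering values around 1e-10 breaks it: these compare "equal".
assert abs(1e-10 - 2e-10) <= EPS

# A relative tolerance scales with the operands instead:
assert not math.isclose(1e-10, 2e-10, rel_tol=1e-9)
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)
```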

I think of floating point numbers as being good for representing real world
quantities - things you can measure with an instrument. It never matters if
you read 1.2999999999993 instead of 1.3 because every practical instrument is
less accurate than that anyway. If you start needing to test for equality,
that's a sign something's wrong in your design. You can never measure two
lengths with a ruler in real life and check if they're equal. So maybe you
should be using integers instead, or maybe you don't really need equality.

Your point 3 about rounding issues with addition disappears if you think of
the value as being a physical measurement. You just don't care about those
errors because they are always insignificant.

Isn't the problem with floats for money more a problem with the conventions of
accountants? By rounding everything to 2 decimal places, they're obviously
creating far more error than double precision's ~16 places. They just tolerate
their specific type of error by convention.

------
JesperRavn
I don't know if the popularity of this on HN is a case of the emperor's new
clothes, or if I am really missing something, but to me this is sophomoric.
There is nothing inherently base-10 to finance or any other field. Finance
deals in _exact_ quantities but that is because it deals in exact units. E.g.
if a single stock tick is 0.1 cents, then a 0.11 cent increase is as
meaningless as a 1/30 cent increase. There is no more need to represent
arbitrary exact decimals as there is to represent arbitrary exact binary
decimals. All that finance needs is integer multiples of the relevant unit.
And presumably this is already hardcoded into their COBOL or FORTRAN code.

Nothing presented on that page made me think that DEC64 would be useful for
anything that integer arithmetic in the right units would not be better suited
to.

~~~
tomsmeding
Totally agree! If you want to calculate with cents, or tenths of cents for
that matter, just use an integer based on that unit. You have not just 56, but
64 bits of precision, and no loss of generality in the money field.

~~~
detaro
And then you get a conversion rate with 1/1000 of a cent, and you have to
manually make sure your fixed point gets converted correctly. Or worse, you
get a conversion rate input with a variable number of digits after the decimal
point, and you have to use all of them.

For simpler cases this is fine, but for more complex ones I suspect the mental
overhead for developers would get problematic.
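
A sketch of the bookkeeping this forces, in Python; the rate and its scale
factor are purely illustrative:

```python
from decimal import Decimal

# Illustrative rate with fractional-cent precision: 1.23456 cents per unit.
RATE_SCALE = 100_000          # with integer cents, the extra digits force
rate_scaled = 123_456         # a manual scale factor (1e-5 cents per unit)
amount_units = 700

# Convert to whole cents, rounding half up by hand:
cents = (amount_units * rate_scaled + RATE_SCALE // 2) // RATE_SCALE
assert cents == 864           # 700 * 1.23456 = 864.192 cents

# A decimal type carries the scale with the value instead:
assert Decimal("1.23456") * 700 == Decimal("864.192")
```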

~~~
robert_
Why not just represent different currencies as different "types" and a
quantity of their basic unit (cents for $, pennies for £, tambala for Kwacha)
and have standard conversions between them:

        kwacha_t my_kwacha = 100000; // 1000 Kwacha
        euro_t my_euro = EU_from_kwacha(my_kwacha);
        printf("My Kwacha in Euros: %d\n", my_euro / 100);

Most of the problems discussed seem to revolve around trying to have some
universal monetary unit, which isn't possible in practice anyway. Just settle
on a universal currency for international trade (as the $ effectively is) and
convert from/to as necessary for local representation.

Of course you're going to have to select an appropriate "minimum value" of
your selected currency to suit your expected conversions but how is that
different from selecting an appropriate epsilon?

------
legulere
I find it funny how the author easily dismisses IEEE decimal floating point
because it hasn't seen wide adoption since it was introduced in 2008.

It has since been implemented in some CPUs: IBM z9 and upwards, POWER6 and upwards.

~~~
pgaddict
Yeah, exactly the same impression here. And it's not the case that there are
no other software implementations - for example GCC supports _Decimal32/64/128
since 4.3 ([https://gcc.gnu.org/onlinedocs/gccint/Decimal-float-
library-...](https://gcc.gnu.org/onlinedocs/gccint/Decimal-float-library-
routines.html)).
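
Python's decimal module is another widely deployed software implementation;
per its documentation it follows IBM's General Decimal Arithmetic
specification, which also underlies the IEEE 754-2008 decimal formats. For
example, emulating decimal128's 34 significant digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 34  # decimal128 carries 34 significant digits
third = Decimal(1) / Decimal(3)
assert third == Decimal("0." + "3" * 34)
```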

------
jensenbox
Personally, I like this a lot. What are the potential downsides?

~~~
Others
I think that the chance of this becoming prevalent is low. While it has its
merit as a representation, floating point is already "good enough" for most
computations. More importantly, it is ubiquitous.

Additionally, I don't know if there are enough compelling problems that
require decimal numbers, can't take the hit of software emulation, and are
willing to bind their performance to hardware that would take a while to be
adopted, if at all.

I could be wrong though.

~~~
iopq
Floating point is really stupid for most computations. You can't store
floating-point numbers when dealing with money (you have to store the number
of cents or something equally stupid, so you don't tell Paypal to sell
something for $12.999999999999996, because they won't accept that price).

Can't store floating point numbers when you really need a ratio because it
won't even work after you round it unless you reconcile different settings for
precision. Decimal won't help here either.

So outside of niche scientific uses or something like graphics where you need
performance, what do you really need floating point numbers for? I haven't had
a need for floating point numbers outside of some mathematical formulae.
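
The "store the number of cents" workaround the comment mentions is at least
mechanical; a sketch in Python, with a hypothetical formatting helper:

```python
def format_cents(cents: int) -> str:
    """Hypothetical helper: render integer cents as a dollar string."""
    sign = "-" if cents < 0 else ""
    cents = abs(cents)
    return f"{sign}${cents // 100}.{cents % 100:02d}"

# The stored value is exact, so the display can never come out as
# $12.999999999999996:
assert format_cents(1299) == "$12.99"
assert format_cents(5) == "$0.05"
assert format_cents(-250) == "-$2.50"
```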

~~~
theseoafs
> So outside of niche scientific uses or something like graphics where you
> need performance, what do you really need floating point numbers for? I
> haven't had a need for floating point numbers outside of some mathematical
> formulae.

This is... a weird thing to say. Obviously floats are going to be most useful
for math, and not useful for much else. They're number types.

~~~
iopq
Yes, but decimal numbers would be useful in:

1\. Game score display where fractional points are possible

2\. Currency

3\. Any amount displayed to humans (like pounds of beef sold)

because decimal numbers correspond to what you're going to display to the
user, while binary numbers are only useful for the result they produce inside
an algorithm.

~~~
theseoafs
So what is your point?

------
jhallenworld
I wrote an arbitrary precision ASCII decimal floating point library. No
conversion required at all!

[https://github.com/jhallen/joes-
sandbox/tree/master/lib/farb](https://github.com/jhallen/joes-
sandbox/tree/master/lib/farb)

(There are much faster alternatives, this was just for the fun of it).

------
ino
I like this idea in the sense that it makes life easier, like garbage
collection, for example.

It's good to understand what's going on behind the scenes, how memory is
allocated and managed, and the floating-point quirks of its binary nature. But
for many general applications it'd be nice to have the peace of mind that (0.1
+ 0.2 == 0.3) returns true.
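
In Python, for instance, the binary comparison fails while its decimal type
gives the expected answer:

```python
from decimal import Decimal

assert 0.1 + 0.2 != 0.3                  # binary doubles miss by one ulp
assert 0.1 + 0.2 == 0.30000000000000004  # the nearest double to the true sum

assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```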

------
rcthompson
> nan is equal to itself.

Why?

~~~
kazinator
Because the designer is sane, and understands that A is A, no matter what A
is:

[https://en.wikipedia.org/wiki/Law_of_identity](https://en.wikipedia.org/wiki/Law_of_identity)

~~~
the_mitsuhiko
It sounds more like the designer does not understand what NaN is. The whole
point of NaN is that it propagates through all operations, so making (NaN ==
NaN) == true defeats that purpose.

~~~
kazinator
> _It sounds more like the designer does not understand what NaN is. The whole
> point of NaN is that it propagates through all operations, so making (NaN ==
> NaN) == true defeats that purpose._

In that case, making (NaN == NaN) == false also defeats it. false isn't NaN
either. If true or false propagates out of a function whose inputs are NaN,
NaN has failed to propagate.

~~~
joesb
> In that case, making (NaN == NaN) == false also defeats it. false isn't NaN
> either.

With that logic, you could say making (1 == 1) == true is wrong, because 1
isn't true.

> If true or false propagates out of a function whose inputs are NaN, NaN has
> failed to propagate.

He meant that NaN propagates through _calculation_ operations, not
_comparison_ operations. As in, the result of calculating NaN + 1 is NaN.

So if you have "A = NaN" and "B = A + 1", you have the situation that both A
and B are NaN but A does not equal B.
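
That scenario is exactly what IEEE 754 specifies; a quick check in Python
(whose floats are IEEE doubles):

```python
import math

a = float("nan")
b = a + 1                 # NaN propagates through the calculation
assert math.isnan(a) and math.isnan(b)
assert a != b             # yet no comparison involving NaN is ever true
assert not (a == a)       # even NaN == NaN is false under IEEE 754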

~~~
kazinator
> _With that logic, you could say making (1 == 1) == true is wrong, because 1
> isn't true._

1 == 1 isn't erroneous. Producing a boolean is the type of the operation:
number x number -> bool.

(bool's domain could be extended with additional "nonbool" values, which are
produced if some of the operands to a numeric comparison are not numbers.)

> _calculation operation, not comparison operation_

What's the difference? It's all computation. The comparison calculates a
boolean result.

> As in, result of calculating Nan + 1 is NaN.

No it isn't. Proof:

      NaN + 1 == NaN  // yields false

As a hacky workaround, you have to use some special predicate to test for
NaN-ness, because the above doesn't work.

What would make sense would be to allocate a new NaN, so that it is legitimate
that NaN + 1 != NaN. Neither side is a number, but they express different,
individual ways of not being a number through their different identities. And
the "isnan(X)" predicate tests whether X is an element of the set of all NaNs.

If that's not palatable, throw exceptions: don't allow NaN + 1 (why allow
addition on "not a number"? Why even allow "not a number" to be stored into a
variable of number type, in a static language?)

