
DEC64: Decimal Floating Point
http://dec64.com/
======
stephencanon
If you want a default decimal floating-point type, the only defensible choice
is the decimal128 type standardized by IEEE 754. It has a fully-defined
arithmetic, specified by experts who have spent their careers thinking about
the issues involved, and is wide enough to exactly represent the US national
debt in every currency used on earth.

There are situations where other decimal floating-point types are appropriate,
but if you do not understand the tradeoffs you are making, you should be using
decimal128.

I don't agree with everything in Jens Nockert's "silly review", but he's right
about a lot of things: [http://blog.aventine.se/2014/03/09/a-silly-review-of-
dec64.h...](http://blog.aventine.se/2014/03/09/a-silly-review-of-dec64.html)

I made a few notes the last time I saw this type come up somewhere:

\- It has significantly less exponent range than IEEE decimal64 (it
effectively throws away almost three bits in order to have the exponent fit in
a byte; 2^56 is ~7.2E16, which means that it can represent some, but not all,
17-digit significands; the effective working precision is 16 digits, which
actually requires only ~53.15 bits).
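
The bit-counting above is easy to check directly; here is a quick sketch (plain Python arithmetic on the figures quoted, not DEC64 code):

```python
import math

coeff_bits = 56                      # DEC64's coefficient field width

print(2 ** coeff_bits)               # 72057594037927936, i.e. ~7.2e16

# Some, but not all, 17-digit significands fit in 56 bits...
assert 10**17 - 1 > 2**coeff_bits
# ...while every 16-digit significand fits comfortably:
assert 10**16 - 1 < 2**coeff_bits

# and 16 decimal digits require only ~53.15 bits:
print(math.log2(10**16))             # ~53.15
```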

\- Even if you weren’t going to use those extra bits for exponent, they could
be profitably used for other purposes.

\- Lack of infinity is a mild annoyance.

\- Rounding is biased for add/sub/mul, broken for divide.

\- It's not significantly more computationally efficient than the IEEE
formats, despite handwaving by the author.

\----

Edit: I would be remiss not to note that Intel has made available a well-
tested complete implementation of IEEE 754 decimal64 and decimal128 under
3-clause BSD:
[http://www.netlib.org/misc/intel/](http://www.netlib.org/misc/intel/)

~~~
ghewgill
Also see Intel's own page about their decimal floating point library:
[https://software.intel.com/en-us/articles/intel-decimal-
floa...](https://software.intel.com/en-us/articles/intel-decimal-floating-
point-math-library/)

I don't know how much effort Intel is allocating to this project; I sent a bug
report to the author listed on that page in 2015. The bug was acknowledged and
fixed, with a new release promised within "several days", but nothing has
appeared yet.

------
cornholio
The flaws raised three years ago when it was posted to HN are still relevant:

[https://news.ycombinator.com/item?id=7365812](https://news.ycombinator.com/item?id=7365812)

~~~
Veedrac
That thread is rather depressing, being mostly just kneejerk and insults. From
what I can tell, the actual criticisms are just:

1\. "rounding modes and overflow behavior are not addressed", which is
incorrect.

2\. "Where's exp(), log(), sin(), cos()?", which you can find in dec64_math.c.

3\. "There are also 255 representations of _almost all representable numbers_
", which is incorrect.

4\. "it will take around FIFTY INSTRUCTIONS TO CHECK IF TWO NUMBERS ARE EQUAL
OR NOT" in the slow case, which is probably true but is unlikely to be common
given the design of the type. It can undoubtedly be improved with effort.

5\. "The exponent bits should be the higher order bits. Otherwise, this type
breaks compatibility with existing comparator circuitry", which seems like an
odd comment given the type is not normalized.

6\. "With both fields being 2's compliment, you're wasting a bit, just to
indicate sign", which AFAICT is just false.

7\. "there's no fraction support", which I don't understand.

So the only _valid_ criticism I saw in that whole thread is #4, and even that
is only a tenth as valid as its author thought it was.
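
For the curious, #4 is easiest to see with a sketch. A DEC64 value is coefficient × 10^exponent, and the format is not normalized, so equal numbers can have different encodings. The following is an illustrative model only, not the reference implementation (which works on packed 64-bit words and must guard against coefficient overflow when scaling, hence the instruction count):

```python
# Illustrative model of DEC64 equality: a value is a (coefficient, exponent)
# pair meaning c * 10**e. Not the reference implementation.

def dec64_eq(a, b):
    if a == b:                       # fast path: identical encodings
        return True
    # slow path: unnormalized format, so align exponents before comparing
    (ca, ea), (cb, eb) = a, b
    if ea < eb:
        (ca, ea), (cb, eb) = (cb, eb), (ca, ea)
    return ca * 10 ** (ea - eb) == cb

print(dec64_eq((10, 0), (1, 1)))     # True: two encodings of the number 10
print(dec64_eq((10, 0), (1, 2)))     # False: 10 != 100
```

Python's big integers hide the part that costs real instructions: in 64-bit registers the scaling multiply can overflow, so the real slow path needs overflow checks at every step.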

~~~
BlackFingolfin
Re 1: it is true that there is a single paragraph "discussing" rounding in
[https://github.com/douglascrockford/DEC64/blob/master/dec64....](https://github.com/douglascrockford/DEC64/blob/master/dec64.asm.html)
\-- which is not much. If there is more, I did not find it, and would
genuinely appreciate a pointer! In the meantime, though, alternative rounding
methods are missing, and those can be quite important for both scientific and
business computations. I guess they could be added. But a lot of things "could
be done"; the fact is, they have not been done so far. As it stands, DEC64 is
mostly a proposal for people to sit down and work out an actual standard.
Perhaps this will happen one day, but so far it seems not many people are
convinced it's worth investing effort into it. (I am also not sure whether the
author is interested in feedback and collaboration? I see no indication of
that anywhere.)

Re 2: these are at best toy implementations, at worst dangerous (in the sense
that they may provide wildly inaccurate results due to convergence problems).

Re 3: agreed. Though there is still a gargantuan number of values which have 2
or more representations, and that makes all kinds of comparisons more
complicated. Dealing with that efficiently in SW is difficult, and more so in
HW. It might be worth it if the advantages outweigh it, but at least I
personally don't see it.

Re 4: if it can be "undoubtedly improved with effort", why hasn't it been done
in several years? Sure, it may be possible, but I will keep my doubts for the
time being :-)

I agree with you on 5, 6 and 7.

~~~
Veedrac
> In the meantime, though, alternative rounding methods are missing, and those
> can be quite important for both scientific and business computations.

My inexpert understanding is that modifying rounding modes is super niche and
poorly supported by most things, so this doesn't strike me as much of a
problem. A saner replacement to rounding mode flags would just be to offer
different operations for those rare cases they are wanted.

> Dealing with that efficiently in SW is difficult, and more so in HW.

Not really; you never really need to normalize values and not doing so makes
basically everything other than comparisons cheaper. I don't see how
normalizing around every arithmetic operation would make the hardware any
simpler.

> if it can be "undoubtedly improved with effort", why hasn't it been done in
> several years?

Because this is one guy's project and it hasn't seen much (any?) use.

------
tsenart
[http://blog.aventine.se/2014/03/09/a-silly-review-of-
dec64.h...](http://blog.aventine.se/2014/03/09/a-silly-review-of-dec64.html)

~~~
84Winston
>Incidentally, binary floating point can also represent almost 16 (roughly
15.955) decimal places accurately

That's false. The author does not understand the limitations of binary
floating point.

 _0.2 is a periodic number in binary floating point_.

People need to first understand why _0.2 is a periodic number in binary
floating point_ before writing a blog post.

~~~
BlackFingolfin
I see no contradiction here, as long as the period is >= 16...

~~~
wilun
Yep, it seems people need to understand the details of base translation before
writing a comment :p

Anyway, IIRC you can round-trip 15 decimal digits correctly with an IEEE
double. I don't know if it's floor(15.955) or something else, but it's close
enough for me to consider DEC64 quite useless compared to existing, quite
well-designed and widely used implementations of FP.
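
That round-trip guarantee is easy to demonstrate in standard Python, relying only on the documented DBL_DIG = 15 property of IEEE doubles:

```python
# 0.2 has an infinite binary expansion, visible once you print 17 digits...
print(format(0.2, ".17g"))           # 0.20000000000000001

# ...but any decimal string with at most 15 significant digits survives a
# round trip through an IEEE 754 double unchanged (DBL_DIG == 15):
s = "0.123456789012345"              # 15 significant digits
assert format(float(s), ".15g") == s
print(format(0.2, ".15g"))           # 0.2
```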

------
macdice
How does this relate to
[https://en.wikipedia.org/wiki/IEEE_754](https://en.wikipedia.org/wiki/IEEE_754)
(2008 edition) which added decimal floating point in a couple of sizes
including 64 bit? It's strange to publish something in the same space without
any reference to that. It appears to be incompatible (IEEE 754 decimal64 has
53 bits of significand and 11 bits of exponent; this thing has 56 bits and 8).
How is that helpful?! Libraries and hardware supporting IEEE 754 exist. What
am I missing?

Edit: it does say "A later revision of IEEE 754 attempted to remedy this, but
the formats it recommended were so inefficient that it has not found much
acceptance." at the end. Hmm.

~~~
baobrien
I don't think hardware support for '754 in decimal mode is very common, but
the other points still stand.

~~~
zokier
Still a lot more common than HW support for DEC64.

------
baobrien
This has got to be very slow, both in hardware and software implementations,
compared to IEEE packed decimal float.

Addition and subtraction on anything other than matched exponents is going to
need rounds of multiplication-by-10, however you implement it. Using IEEE-754
packed decimal, you only need a handful of gates per digit to unpack into BCD.
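
To make the multiply-by-10 point concrete, here is a toy model of unnormalized decimal addition (my own sketch, not DEC64's actual code, which must additionally round when the scaled coefficient would overflow 56 bits):

```python
# Toy model: a value is a (coefficient, exponent) pair meaning c * 10**e.

def dec64_add(a, b):
    (ca, ea), (cb, eb) = a, b
    # align exponents by repeated multiplication by 10, as described above
    while ea > eb:
        ca *= 10
        ea -= 1
    while eb > ea:
        cb *= 10
        eb -= 1
    return (ca + cb, ea)             # result left unnormalized

print(dec64_add((25, -1), (3, 0)))   # (55, -1), i.e. 2.5 + 3 = 5.5
```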

~~~
Veedrac
> both in hardware and software implementations

Software multiplication is pretty fast? BCD is comparatively hard to do in
software.

------
microcolonel
> _In modern systems, this sort of memory saving is pointless. By giving
> programmers a choice of number types, programmers are required to waste
> their time making choices that don’t matter. Even worse, making a bad choice
> can lead to a loss of accuracy or destructive bugs. This is a bad practice
> that is very deeply ingrained._

Boundless arrogance, minimal information.

If you want a decimal floating-point type, use the IEEE formats; there are
high-quality implementations everywhere, and if you're using C++ you can
probably switch with little more than a string replacement and a couple of
header includes. Douglas Crockford can be forgiven for having apparently no
concept of computing at the limits of the machine (in terms of cache latency,
memory size, and CPU throughput), but if you get comfortable with this level
of ignorance and aren't famous, you will be highly replaceable.

Added:

Moore's law is effectively dead. This means that for a given CPU
microarchitecture family and your monetary or spatial budget for memory, you
have limited resources to achieve your goals. If you write an inefficient
program with no regard for performance at the numerics level, your program
will no longer automatically get much faster and cheaper to run, you will
instead be contributing to the pile of performance debt your successors will
be cursing and shedding tears over.

Furthermore, with his "loss of accuracy" comment, he seems to imply that his
64 bit decimal types are even remotely large enough that common users will not
lose accuracy (by which I suppose he really means _precision_ ).

------
TimMurnaghan
> DEC64 is intended to be the only number type in the next generation of
> application programming languages.

Intended by whom? A lone voice (however correct), a standards body, or an
industry consortium? This article should really carry some authorship
information.

~~~
ballenf
I believe it's Douglas Crockford:

[https://github.com/douglascrockford/DEC64](https://github.com/douglascrockford/DEC64)

------
ChuckMcM
A much better discussion and design for decimal arithmetic on binary computers
was done by Mike Cowlishaw
([http://speleotrove.com/decimal/](http://speleotrove.com/decimal/)). He
shifted the entire computation chain into decimal (representing decimal digits
in a packed form as well).

------
olliej
Ugh, having an explicit bit prior to the decimal point was a mistake Intel
made in x87 \-- it introduces a pile of horror, as there end up being multiple
representations of the vast majority of numbers, which means you have to
normalize prior to any comparison operation, or accept that your comparisons
may be wrong.

The explicit leading bit directly halves the space of representable values
(which is how you get room for multiple representations).

If you want insight into how awful this is: x87 has pseudo-infinities,
pseudo-NaNs, unnormal values, and pseudo-denormals.

The solution Intel eventually took was to recognize that the leading bit was
useless, but to require it to behave as though it were implicit and to treat
any case where that does not hold as invalid.

So yes, you do want a format that requires normalization, because the
alternative is one that requires normalization anyway, but is also insane for
any kind of comparison, and wastes precision needlessly.

------
usr1106
Nice. It is unbelievable that none of today's widely used programming
languages has standard support for handling money correctly. I don't want to
know how many programs use (binary) floating point to do it.

(Disclaimer: I don't work with any financial figures and I have not checked
whether the dec64 proposal is a sound one.)

~~~
rwallace
Java and Python, for a start, have decimal number types in their standard
distributions.

~~~
sadiuasdi67f678
In Java, java.math.BigDecimal is not a primitive data type. It also has an
awkward API due to the lack of operator overloading (a consequence, it seems,
of BigDecimal not being a primitive type).

In Python, the story looks slightly better:

    
    
        >>> from decimal import *
        >>> getcontext().prec = 6
        >>> Decimal(1) / Decimal(7)
        Decimal('0.142857')
    

But neither of these data types is even close to being a first class citizen
in either language.

~~~
usr1106

      getcontext().prec = 6
    

makes no sense for money. You need exactly 2 digits after the decimal point at
all times, whether the amount is 0.01 or 1000000.01.

AFAIK (and I don't do this for work), after every calculation you manually
need to call quantize as in

    
    
      import decimal
    
      cents = decimal.Decimal('.01')
      money = price.quantize(cents, decimal.ROUND_HALF_UP)

~~~
the-dude
How would you handle prices for components which do have more than 2
significant digits? (I have examples if you wish.)

~~~
usr1106
Of course prices with more than 2 digits exist. In Germany petrol/gas has
always been priced with 3 decimals, and in B2B it's even more common. I'm
talking about the 98% of consumer business where the precision is full cents
for each transaction.

In general you need to know when to round and to how many digits. But my
complaint was that languages have a natural API for binary floating point,
which is useful for scientific number crunching, but rarely anything
comparable for commercial calculations.

I have not tried to design a natural API for fixed (but parameterizable)
decimal precision and the implicit rounding it requires. But I would be
surprised if it were impossible to come up with something less verbose (no
explicit constructors everywhere) and less error-prone (no rounding calls to
forget).
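
For what it's worth, such an API can be prototyped in a few lines on top of Python's decimal module; the Money class below is purely my own illustration of the "implicit rounding after every operation" idea, not an existing library:

```python
from decimal import Decimal, ROUND_HALF_UP

class Money:
    """Hypothetical fixed-scale decimal type: always 2 places, half-up."""
    CENTS = Decimal("0.01")

    def __init__(self, amount):
        # every value is implicitly quantized on construction
        self.amount = Decimal(str(amount)).quantize(self.CENTS, ROUND_HALF_UP)

    def __add__(self, other):
        return Money(self.amount + other.amount)

    def __mul__(self, factor):
        # rounding happens implicitly, via the constructor
        return Money(self.amount * Decimal(str(factor)))

    def __eq__(self, other):
        return self.amount == other.amount

    def __repr__(self):
        return f"Money({self.amount})"

print(Money("19.99") * 3 * 1.24)     # Money(74.36): rounded at every step
```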

------
garmaine
The most important part of IEEE decimal arithmetic is the (weakened, made
optional) mandate that decimal floating point perform exact arithmetic:
deterministic cross-platform results on the same inputs. This is important
not just for the financial accounting that motivated this departure from
binary floating point (where error is tolerated), but for any distributed
system that needs reliable computation in the presence of heterogeneous
hardware or compiler optimizations. It is embarrassing that we live with this
state of affairs in 2018, with no way to do deterministic/exact real-number
arithmetic in programs without emulating the FPU in software.

~~~
wilun
IEEE binary floating point is deterministic. There is no such thing as
tolerated errors in compliant implementations.

Non-compliant implementations are widespread for performance reasons, but that
is a completely different story.

Also, I'm not sure about GPUs, but FPUs are typically compliant in HW; it is
typically the compilers that have faster approximate modes, and I think for
mainstream ones only when you explicitly enable such optimizations, which are
disabled by default.

------
saagarjha
> DEC64 is intended to be the only number type in the next generation of
> application programming languages.

Sure, but to get there it'd need to interface with the current generation of
languages which still use double.

------
hoosieree
I imagine this would be useful for applications which do a lot of conversion
between IEEE 754 and text... Programming languages and spreadsheet software
come to mind.

Has anyone ported a non-trivial application to use DEC64, and compared the
results?

------
qwerty456127
This ought to be built into everything so financial applications can stop
using slow software decimal types.

