
Beware that BCD, and decimal in general, accumulates roundoff error at a much higher rate than binary if you do any inexact operations: for the same word size you get fewer significant digits, so each rounding step costs more relative precision.

It is more common these days to use base-1000 instead when you need an exact decimal representation. You can fit three base-1000 "digits" in a 32-bit word, with two bits left over for a sign plus any other flag you find useful. (One such use could be to make a zero in the second place indicate that the rest of the word is actually binary; then regular arithmetic works on such words.) Calculations in base-1000 are quite a lot faster than in BCD.
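
For concreteness, here is a rough C sketch of one way such packing might look; the 10-bit field layout, the sign-bit position, and all the names are my assumptions, not necessarily the parent's exact scheme:

    #include <stdint.h>
    #include <assert.h>
    #include <stdio.h>

    /* Assumed layout: three base-1000 digits in 10-bit fields,
       sign in bit 30, one spare flag bit in bit 31. */
    typedef uint32_t dword;

    static dword pack(int negative, unsigned d2, unsigned d1, unsigned d0) {
        assert(d0 < 1000 && d1 < 1000 && d2 < 1000);
        return ((dword)(negative ? 1 : 0) << 30)
             | ((dword)d2 << 20) | ((dword)d1 << 10) | (dword)d0;
    }

    static unsigned digit(dword w, int i) {   /* i = 0 (low digit) .. 2 (high) */
        return (w >> (10 * i)) & 0x3FFu;
    }

    int main(void) {
        dword w = pack(0, 123, 456, 789);     /* the value 123,456,789 */
        printf("%u%03u%03u\n", digit(w, 2), digit(w, 1), digit(w, 0));
        return 0;
    }

With a layout like this, arithmetic would proceed digit-by-digit mod 1000 with explicit carries, rather than nibble-by-nibble as in BCD.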

Almost always when people think they need decimal, binary -- even binary floating-point, if the numbers are small enough -- is much, much better. Just be sure to represent everything as an integer number of the smallest unit, say pennies; and scale (*100, /100) on I/O.
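
A minimal sketch of that (assuming non-negative "D.CC" input with exactly two decimal places; the helper names are made up for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <inttypes.h>

    /* Hold amounts as integer cents; scale by 100 only at the I/O edge. */
    static int64_t cents_from_string(const char *s) {
        char *end;
        int64_t dollars = strtoll(s, &end, 10);
        int64_t cents = (*end == '.') ? strtoll(end + 1, NULL, 10) : 0;
        return dollars * 100 + cents;
    }

    static void print_cents(int64_t c) {
        printf("%" PRId64 ".%02" PRId64 "\n", c / 100, c % 100);
    }

    int main(void) {
        int64_t a = cents_from_string("19.99");
        int64_t b = cents_from_string("0.01");
        print_cents(a + b);   /* prints exactly 20.00 */
        return 0;
    }

All intermediate arithmetic stays exact integer math; nothing gets rounded unless you divide.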

"Much, much better" in what sense? Just performance?

Performance, correctness, and maintainability. The amount of code needed is very small, and it uses native instructions for the work, which are pretty well tested.


