Hacker News

Maybe. It would get rid of some issues but might make people complacent about other issues.

Even then it probably wouldn't be BCD; that's too inefficient. The digit-packing (densely-packed-decimal) encoding of IEEE 754 decimals uses 10 bits for each block of 3 digits: 99.7% storage efficiency rather than BCD's 83%.
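To make the efficiency numbers concrete, here's a small sketch. The helper names (`pack3`/`unpack3`) are hypothetical, and the packing shown is the naive "store the 3-digit value in binary" version; real IEEE 754 DPD uses a Chen-Ho-derived bit layout instead, so individual digits can be extracted with simple gate logic rather than a divide.

```python
from math import log2

# Storage efficiency = information bits per storage bit.
bcd_eff = log2(10) / 4       # one digit per 4-bit nibble  -> ~83.0%
dpd_eff = log2(1000) / 10    # three digits per 10 bits    -> ~99.7%
print(f"BCD: {bcd_eff:.1%}, 10-bit blocks: {dpd_eff:.1%}")

def pack3(d2, d1, d0):
    """Naive packing: three decimal digits into one 10-bit field.

    Works because 999 < 1024; the actual DPD encoding arranges the
    bits differently for cheap hardware digit extraction.
    """
    assert all(0 <= d <= 9 for d in (d2, d1, d0))
    return d2 * 100 + d1 * 10 + d0

def unpack3(x):
    return x // 100, (x // 10) % 10, x % 10
```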




Oh hey, I had no idea about this, thanks! I wrote a short essay about this idea in July ("BCM: binary-coded milial") but I didn't know it was already widely implemented, much less standardized by the IEEE! Do they use 1000s-complement for negative numbers? How does the carry-detection logic work?

I also thought about excess-12 BCM, which simplifies negation considerably, but complicates multiplication.
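The negation simplification is the base-1000 analogue of excess-3 BCD: if each base-1000 digit d is stored as d + 12 in a 10-bit field, then the 999's complement of the digit is just the bitwise NOT of the field, since 1023 - (d + 12) = (999 - d) + 12. A sketch (the function names are mine, not from any standard):

```python
# Excess-12 encoding of one base-1000 digit in a 10-bit field.
def encode(d):
    """Store digit d (0..999) as d + 12; fits since 1011 < 1024."""
    return d + 12

def decode(x):
    return x - 12

def complement999(x):
    """999's complement of the digit is a plain 10-bit bitwise NOT."""
    return ~x & 0x3FF

# Check the identity over the whole digit range.
for d in range(1000):
    assert decode(complement999(encode(d))) == 999 - d
```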


There's a sign bit, just like in binary floating point. The details of how to do the math are up to the implementer, but I'm sure any bignum algorithm would work fine.

https://en.wikipedia.org/wiki/Decimal_floating_point
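For a software reference point: Python's `decimal` module implements decimal floating point (it follows the General Decimal Arithmetic spec, which fed into IEEE 754-2008), and it's sign-magnitude just as described, so negative zero exists:

```python
from decimal import Decimal

# Decimal arithmetic avoids binary representation error:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
print(0.1 + 0.2 == 0.3)                                   # False

# Sign-magnitude, like binary IEEE floats: the sign is a separate
# flag rather than part of a complement encoding, so -0 exists and
# compares equal to +0.
print(Decimal("-0").is_signed())       # True
print(Decimal("-0") == Decimal("0"))   # True
```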

I can't say I'm a huge fan of the standard defining two separate encodings: one that stores the significand as a single binary integer (BID, favored in Intel's software library) and one that uses a series of 10-bit densely-packed-decimal fields (DPD, used in IBM hardware). There's no way to tell from a bit pattern which encoding was used, either.


I do remember an ON (original nerd) mentioning tracking down a problem (a blown NAND gate) in a floating point unit. The computer worked fine and passed all the tests, but the accounting department was complaining that their numbers were coming out wrong.

The problem was in the decimal float calculations, which only the accounting department's programs exercised, because they used BCD for money.




