Money is really best dealt with as integers. Any time you'd use a non-integer number, use some fixed multiple that makes it an integer, then divide by the excess factor at the end of the calculation. For instance, computing 2.15% yearly interest on a bank account might be done as follows:

```
DaysInYear = 366
InterestRate = 215              # 2.15%, scaled by 10000
DayBalanceSum = 0
for each Day in Year:
    DayBalanceSum += Day.Balance
InterestRaw = DayBalanceSum * InterestRate
InterestRaw += DaysInYear * 5000   # half the divisor, so the division rounds
Interest = InterestRaw / (DaysInYear * 10000)
Balance += Interest
```

Balance should always be expressed in the smallest fraction of currency that we conventionally round to, like 1 yen or 1/100 of a dollar. Adding in half of the divisor before dividing effectively turns floor division into correctly rounded division. This is called fixed-point arithmetic: https://en.wikipedia.org/wiki/Fixed-point_arithmetic

> In computing, a fixed-point number representation is a real data type for a number that has a fixed number of digits after (and sometimes also before) the radix point.

> A value of a fixed-point data type is essentially an integer that is scaled by an implicit specific factor determined by the type.

Yeah, though that notion tends to come with some conceptual shortcomings, like presuming a power-of-10 radix. In the above code the scaling factor is implicitly different on leap years; applying such tricks is usually not possible with a fixed-point library or language construct.

Sounds like fractions cleanly describe what you're saying?

But that practically holds only for a reasonable amount of simple arithmetic. Fractional components tend to grow exponentially when many numerical methods are repeated multiple times. This can happen if you're describing money and want to apply a complex numerical method from an economics article for whatever purpose. Might be worth it, but be careful not to carry ever-expanding fractions in your system.
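The integer-scaling trick from the original comment can be written out as runnable Python. This is a minimal sketch under stated assumptions: balances are stored as integer cents, and the function name and parameters are illustrative, not from the original.

```python
def yearly_interest(day_balances, rate_scaled=215, days_in_year=366):
    """Interest on the average daily balance, in pure integer arithmetic.

    day_balances: end-of-day balances in cents, one entry per day.
    rate_scaled:  the interest rate scaled by 10000 (215 -> 2.15%).
    Returns interest in cents, rounded to the nearest cent.
    """
    day_balance_sum = sum(day_balances)            # still integer cents
    interest_raw = day_balance_sum * rate_scaled
    interest_raw += days_in_year * 5000            # half the divisor: floor -> round
    return interest_raw // (days_in_year * 10000)  # integer division only

# A constant balance of $1,000.00 (100_000 cents) for a leap year
# earns exactly 2.15%: 2150 cents, i.e. $21.50.
print(yearly_interest([100_000] * 366))
```

Note that no floating point ever appears: the only division is the final integer division, and the `days_in_year * 5000` term (half of `days_in_year * 10000`) is what makes that division round to nearest instead of truncating.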
This is only for dealing with actual money; generally our banking systems have rounding rules that prevent the fractions from getting out of hand. If you are running an economic simulation you generally don't have to worry about rounding — the whole thing is only approximate anyway.

Yup. Once worked on a big project with one of the largest US exchanges. We were migrating large OTC (over the counter) CDS (credit default swap) contracts to standardized centralized contracts. We were testing with large contracts: millions of contracts worth trillions of dollars. I was off by a single penny and failed the test. Took a while to find, but it was due to a truncate-to-zero instead of a proper round. I was using a floating point type instead of a proper decimal. Don't think the language I was using had a proper decimal type at the time, though it does now, 11 years later.

> Money is really best dealt with as integers

I wish I could upvote you more than once. You are bang on.
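The off-by-a-penny bug described above is easy to reproduce. A minimal Python sketch, with made-up numbers: a price that cannot be represented exactly in binary floating point gets truncated toward zero, while a proper decimal type with explicit rounding gets the cent right.

```python
from decimal import Decimal, ROUND_HALF_UP

price = 2.675  # dollars; the nearest binary float is actually 2.674999...

# Truncate toward zero on a float: loses a penny.
cents_truncated = int(price * 100)

# Proper decimal arithmetic with explicit half-up rounding.
cents_decimal = int(
    (Decimal("2.675") * 100).quantize(Decimal("1"), rounding=ROUND_HALF_UP)
)

print(cents_truncated)  # 267
print(cents_decimal)    # 268
```

Across millions of contracts, discrepancies like this are why a single test could fail by exactly one penny.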