And that roughly captures the spot where I was seeing doubles used.

Yes, they could have used fixed point. I am guessing that what happened is that someone who had thought way more deeply about this than I ever needed to (I worked on the accounting side, where, yep, we always used decimals) either determined that, where the modeling was concerned, floating point errors were not worth worrying about, or estimated that the expected cost to the company from bugs caused by fixed point math being easier to goof up would have been smaller than the expected cost to the company from floating point error.






To see a 0.1 error using a _double_ you would have to do at least 2*10^17 operations (assuming the worst-case scenario and no subnormals).

If you are working with numbers that huge, 0.1 cents is probably a cost you are willing to pay rather than spend thousands on a software solution. The power you save by using floating point is likely greater than the power your computers would have to spend to get an exact result.
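
If you want to watch the drift accumulate yourself, here is a quick sketch (the one-cent increment and the loop count are arbitrary choices, just picked to make the drift visible):

  fn main() {
      // Repeatedly add one cent (0.01 has no exact binary representation)
      // and compare against the exact total, which does.
      let n: i64 = 10_000_000;
      let mut sum = 0.0_f64;
      for _ in 0..n {
          sum += 0.01;
      }
      let exact = n as f64 / 100.0; // 100000.0, exactly representable
      println!("drift after {n} additions: {}", sum - exact);
  }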


You can get a larger error than that using one operation.

  fn main() {
      // 9007199254740992 is 2^53, the point past which f64 can no longer
      // represent every integer: 2^53 + 1 rounds back to 2^53.
      let x: f64 = 9007199254740992.0;
      assert_eq!(x + 1.0, x);
  }

You are absolutely right.

When adding numbers whose magnitudes differ by a large enough factor (around 2^53, roughly 10^16), the smaller one falls below the format's precision. I should have taken that into account when defining the error boundaries.

In dollars, you start having issues with cents when working with tens of trillions.

For the vast majority of people this won't be an issue.
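
For the curious, a small sketch of where that tens-of-trillions threshold comes from, assuming dollar amounts are stored directly as f64 (the $90 trillion balance is an arbitrary example):

  fn main() {
      // Around tens of trillions of dollars, the gap between adjacent
      // f64 values is 1/64 of a dollar, already bigger than a cent.
      let balance: f64 = 90_000_000_000_000.0; // exactly representable
      let with_cent = balance + 0.01;
      // The sum snaps to the nearest representable value: balance + 1/64.
      println!("{}", with_cent - balance); // prints 0.015625
      assert_ne!(with_cent - balance, 0.01);
  }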


I can give you a 1.0 error. Take a handful of numbers that add up to 1.5, sum them, and then round that result to the nearest unit.

I'm too lazy to figure out a specific example, but sets of numbers where doubles round up and decimals round down (or vice versa) aren't terribly uncommon.
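
Along those lines, a minimal sketch of the effect, at the cent level where two addends are enough (the amounts 0.50 and 0.505 are just chosen so that the exact sum sits on a rounding midpoint):

  fn main() {
      // The exact decimal sum is 1.005, a midpoint when rounding to cents,
      // but the f64 sum lands just below it.
      let float_sum: f64 = 0.5 + 0.505;
      println!("{:.17}", float_sum); // 1.00499999999999989

      // Rounding the f64 sum half-up to whole cents gives 100 cents...
      let float_cents = (float_sum * 100.0).round() as i64;

      // ...while exact arithmetic in tenths of a cent gives 1005,
      // which rounds half-up to 101 cents.
      let exact_tenths: i64 = 500 + 505;
      let exact_cents = (exact_tenths + 5) / 10;

      assert_ne!(float_cents, exact_cents); // 100 != 101
  }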



