
I work on game engines, and the problem with floats isn't with small values like 10.01 but with large ones like 400,010.01 - that's where the precision degrades wildly.
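A quick way to see the spacing in Python, which uses the same IEEE 754 doubles (math.ulp needs Python 3.9+); engines that use 32-bit floats see far coarser gaps still:

    import math

    # Gap between adjacent representable doubles ("one ulp")
    # near a small value vs. a large one.
    print(math.ulp(10.01))       # ~1.78e-15
    print(math.ulp(400_010.01))  # ~5.82e-11, about 32768x coarser
    # For 32-bit floats at the same magnitude the gap is ~0.03125,
    # i.e. visible jitter in world-space coordinates.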


The issue with floats is the mental model. The best way to think about them is as a ruler with many points clustered around 0 and exponentially fewer as the magnitude grows. Don't think of a float as a real number - assume that hardly any values are represented with perfect precision. Even "normal-ish" numbers like 10.1 are not actually in the set. When values are converted to strings, sometimes even in debuggers, they are often rounded, which throws people off further ("hey, the value is exactly 10.1 - it's right there in the debugger"). What you can count on, however, is that integers are represented with perfect precision up to a point (e.g. 2^53 for f64).
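Both points are easy to demonstrate in Python, which uses the same f64 representation:

    # 10.1 is not in the set; printing more digits exposes the
    # nearest double that actually gets stored.
    print(f"{10.1:.20f}")        # 10.09999999999999964473

    # Integers are exact up to 2^53 ...
    print(float(2**53) == 2**53)             # True
    # ... but 2^53 + 1 rounds to the same double as 2^53.
    print(float(2**53 + 1) == float(2**53))  # True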

The other "mental model" issue is that operations which are associative in math aren't in floating point: a + (b + c) != (a + b) + c due to rounding. This is where fp-precise vs fp-fast comes in. Let's not talk about 80-bit registers (though those used to be another thing to think about).
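A minimal sketch of the non-associativity in Python; fp-fast compiler modes are allowed to reassociate like this behind your back:

    a, b, c = 0.1, 0.2, 0.3

    left  = (a + b) + c   # 0.1 + 0.2 rounds up before 0.3 is added
    right = a + (b + c)   # 0.2 + 0.3 happens to round to exactly 0.5

    print(left == right)  # False
    print(left, right)    # 0.6000000000000001 0.6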


Lua is telling me 0.1 + 0.1 == 0.2, but 0.1 + 0.2 != 0.3. That's with 64-bit precision. The issue is not precision as such, but that 1/10 has an infinitely repeating representation in binary.
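Python uses the same doubles as Lua, and its decimal module can show exactly which values get stored (output truncated here for readability):

    from decimal import Decimal

    # None of these is exact: 1/10 is the infinitely repeating
    # 0.000110011001100... in binary.
    print(Decimal(0.1))  # 0.1000000000000000055511151231257827...
    print(Decimal(0.2))  # 0.2000000000000000111022302462515654...
    print(Decimal(0.3))  # 0.2999999999999999888977697537484345...

    # 0.1 + 0.1 lands exactly on the double stored for 0.2;
    # 0.1 + 0.2 lands just above the double stored for 0.3.
    print(0.1 + 0.1 == 0.2)  # True
    print(0.1 + 0.2 == 0.3)  # False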


Not an issue in Scheme and Common Lisp, which have exact rationals built in, nor even in Forth when operating directly on rationals via custom words.
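Python's fractions module gives a rough analogue of the same exact-rational behavior:

    from fractions import Fraction

    # Exact rational arithmetic: 1/10 + 2/10 is exactly 3/10,
    # with no rounding at any step.
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
    print(Fraction(1, 10) + Fraction(1, 5))                      # 3/10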


Not only that, but the precision loss accumulates. Multiply enough numbers with small inaccuracies together and you wind up with numbers carrying large inaccuracies.
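A sketch of the accumulation, shown with repeated addition for clarity; the multiplicative case behaves the same way, with relative error compounding per operation:

    import math

    xs = [0.1] * 1_000_000

    # A million tiny rounding errors add up to something visible.
    print(sum(xs))        # noticeably above 100000.0
    # math.fsum tracks the error terms and returns the correctly
    # rounded sum.
    print(math.fsum(xs))  # 100000.0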



