Even if you understand them... in a polyglot world, you're still going to get burned. It is so easy--especially when you're jumping from a number-tower language like SQL or Scheme to a floating-point one like C or JavaScript--to backslide into boners like `for (i = 0.0; i < 1.0; i += 0.1)`, which would be a non-boner with a more advanced number tower.

edit: My first Clojure program was the last time this got me--LISP on the outside, Java floats/ints/bigints on the inside.
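A quick sketch of why that loop is a boner, written in Python purely because its floats are the same IEEE-754 doubles as C's and JavaScript's: the accumulated rounding error leaves the counter just under 1.0 after ten additions, so the body runs an extra time.

```python
# Repeatedly adding the double nearest to 0.1 accumulates error:
# after ten additions i is 0.9999999999999999, which is still < 1.0,
# so the loop body runs an 11th time.
i = 0.0
ticks = []
while i < 1.0:
    ticks.append(i)
    i += 0.1

print(len(ticks))   # 11, not the 10 you meant
print(ticks[-1])    # 0.9999999999999999
```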

 Yeah, and that's why we have int for counting.
 So if I'm marking an axis from 0 to 0.9 in 0.1 increments, I should obviously do the integer indirection dance? After all, every damn fool knows--even ones like myself who've implemented FP in assembly--that 0.1 + 0.1 + 0.1 = 0.30000000000000004. Silly me!
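For the record, the "integer indirection dance" being mocked here looks something like this (a Python sketch; the principle is identical in C or JavaScript): count in exact integers and divide only at the point of use, so each tick carries one rounding instead of an accumulated sum.

```python
# Loop over exact integers; derive each tick by a single division.
# One rounding per tick, so the values come out as the nearest
# doubles to 0.0 .. 0.9 and print cleanly.
ticks = [k / 10 for k in range(10)]
print(ticks)   # [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
```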
 > So if I'm marking an axis from 0 to 0.9 in 0.1 increments, I should obviously do the integer indirection dance?

Yes. Because you clearly don't understand what you're doing at all if you believe that the equivalence relation is defined the same way for floating-points as it is for integers.

Floating-points aren't numbers. They're more like confidence intervals--and at a certain limit, a confidence interval behaves just like an integer. But it's only a limit.

The fact that most users of programming languages don't want to keep track of that in their code only reinforces my belief that they must do exactly as the software guidelines tell them to.
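The "confidence interval" framing in concrete terms, sketched in Python (same IEEE-754 doubles): exact `==` on floats asks whether two independently-rounded results landed on the identical bit pattern, which is usually not the question you mean; a tolerance-based comparison is.

```python
import math

a = 0.1 + 0.1 + 0.1
print(a == 0.3)              # False: the two sides rounded differently
print(math.isclose(a, 0.3))  # True: equal within relative tolerance
print(a - 0.3)               # ~5.55e-17, the width of the "interval"
```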
 I know what floating point is. I've implemented it in hardware. I've implemented it in software. But you know what? Floating point is not the only way people have ever represented real numbers on the computer.

In most SQLs, decimal fixed-point types are predominant. Most LISPs have a number tower that automatically juggles fixnums, bignums, ratios, floats, and complex numbers. In the 8/16-bit days--before everything had a floating-point unit--it was not uncommon to store and calculate numbers directly in BCD: two digits to the byte, and all of the operations--including formatting--were very straightforward to implement. Many of the old business mainframes even have hardware instructions to pack/unpack/operate on BCD.

> that most users of programming languages don't want to keep track of that...

Why would I want to keep track of that when the language (or, more accurately, the data type) will do it for me?

I'm sorry, man. The whole world isn't C and JavaScript (yet). That we have forced crud-app developers to resort to garbage like this[0]--in places where the performance and storage benefits of floating-point are irrelevant--is a stain on the profession.
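As it happens, Python ships both of the alternatives named above in its standard library, which makes for a cheap demonstration: `decimal.Decimal` is base-10 arithmetic (the SQL DECIMAL / BCD lineage), and `fractions.Fraction` is an exact ratio type (the Lisp number-tower lineage). Neither has the 0.1 problem, because neither stores 0.1 in binary.

```python
from decimal import Decimal
from fractions import Fraction

# Decimal: base-10 arithmetic, so 0.1 is representable exactly.
d = Decimal('0.1') + Decimal('0.1') + Decimal('0.1')
print(d)                      # 0.3

# Fraction: exact rational arithmetic, no rounding at all.
f = Fraction(1, 10) * 3
print(f)                      # 3/10
print(f == Fraction(3, 10))   # True
```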
 No need to do an integer indirection. `i < 0.95` would have sufficed.
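That fix works because the midpoint bound absorbs the accumulated error: a Python sketch (same doubles as C/JavaScript) showing the loop now stops after exactly 10 ticks.

```python
# Bound at 0.95, halfway between the intended last tick (0.9) and the
# first unwanted one (1.0). The ~1e-16 accumulated error can't push
# a tick value across a gap that wide.
i = 0.0
ticks = []
while i < 0.95:
    ticks.append(i)
    i += 0.1

print(len(ticks))  # 10
```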
 Outside of science and engineering, I think fixed-point BCD is otherwise preferable. It's better matched to the expectations we've all had since grade school.

The original example was about steps, not termination. With FP, each tick label has to be post-formatted with a precision specifier to get the desired behavior. With BCD, casting to a string is sufficient (and super-simple to implement).
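The formatting point in Python terms, using `decimal.Decimal` as a stand-in for BCD (both are base-10, so string conversion behaves the same way): the float tick needs a precision specifier, the decimal one does not.

```python
from decimal import Decimal

tick_fp = 0.1 + 0.1 + 0.1       # binary floating point
tick_dec = Decimal('0.1') * 3   # decimal arithmetic

print(str(tick_fp))      # '0.30000000000000004' -- raw cast leaks the error
print(f'{tick_fp:.1f}')  # '0.3' -- needs a precision specifier
print(str(tick_dec))     # '0.3' -- casting to a string is sufficient
```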
