for(i = 0.0; i < 1.0; i += 0.1)
edit: My first Clojure program was the last time this got me--LISP on the outside, Java floats/ints/bigints on the inside.
After all, every damn fool knows--even ones like myself who've implemented FP in assembly--that 0.1 + 0.1 + 0.1 = 0.30000000000000004.
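For reference, a minimal sketch of the drift in question (Python here, but it's the same IEEE-754 double behavior the loop above would hit in C or Java):

```python
# Accumulating 0.1 in binary floating point drifts away from the exact
# decimal values, so the `i < 1.0` test runs one extra iteration: the sum
# of ten 0.1's is 0.9999999999999999, which is still less than 1.0.
i = 0.0
ticks = []
while i < 1.0:
    ticks.append(i)
    i += 0.1

print(len(ticks))       # 11 iterations, not the 10 you wanted
print(0.1 + 0.1 + 0.1)  # 0.30000000000000004
```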
Yes. Because you clearly don't understand what you're doing if you believe the equivalence relation is defined the same way for floating-point numbers as it is for integers.
Floating-point values aren't numbers. They're more like confidence intervals--and in a certain limit a confidence interval behaves just like an integer. But it's only a limit.
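If you take the confidence-interval view seriously, equality should be an interval test rather than an exact bit comparison. A quick sketch using Python's `math.isclose` (the `rel_tol` value is an arbitrary choice for illustration):

```python
import math

a = 0.1 + 0.1 + 0.1
print(a == 0.3)                             # False: exact bitwise comparison
print(math.isclose(a, 0.3, rel_tol=1e-9))   # True: interval-style comparison
```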
The fact that most users of programming languages don't want to keep track of that in their code only reinforces my belief that they must do exactly as the software guidelines tell them to.
In most SQL dialects, fixed-point decimal types are predominant. Most LISPs have a numeric tower that automatically juggles fixnums, bignums, ratios, floats, and complex numbers. In the 8/16-bit days--before everything had a floating-point unit--it was not uncommon to store and calculate numbers directly in BCD: two digits to the byte, with all of the operations--including formatting--straightforward to implement. Many of the old business mainframes even have hardware instructions to pack/unpack/operate on BCD.
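To show how straightforward packed BCD really is, here's a toy pack/unpack pair in Python (hypothetical helper names, no error handling--just the nibble arithmetic):

```python
def bcd_pack(digits: str) -> bytes:
    """Pack a decimal digit string into packed BCD, two digits per byte."""
    if len(digits) % 2:
        digits = "0" + digits  # left-pad to an even number of digits
    return bytes(
        (int(digits[i]) << 4) | int(digits[i + 1])
        for i in range(0, len(digits), 2)
    )

def bcd_unpack(packed: bytes) -> str:
    """Unpack packed BCD back to a digit string--formatting is this trivial."""
    return "".join(f"{b >> 4}{b & 0x0F}" for b in packed)

print(bcd_pack("1234").hex())            # '1234' -- the nibbles ARE the digits
print(bcd_unpack(bytes([0x12, 0x34])))   # '1234'
```

The hex dump of the packed bytes is the decimal number itself, which is exactly why formatting BCD never needed anything clever.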
> that most users of programming languages don't want to keep track of that...
Why would I want to keep track of that when the language (or more accurately, the data type) will do it for me?
The original example was about the step values, not loop termination. With FP, each tick label has to be post-formatted with a precision specifier to get the desired output. With BCD, casting to a string is sufficient (and super-simple to implement).
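The labeling difference can be sketched with Python's `decimal` module standing in for BCD-style fixed-point decimal:

```python
from decimal import Decimal

# Float ticks need an explicit precision specifier to look right...
f = 0.0
float_labels = []
while f < 1.0:
    float_labels.append(f"{f:.1f}")  # without ':.1f' one label is '0.30000000000000004'
    f += 0.1

# ...whereas decimal fixed-point ticks stringify cleanly as-is.
d = Decimal("0.0")
dec_labels = []
while d < Decimal("1.0"):
    dec_labels.append(str(d))
    d += Decimal("0.1")

print(float_labels)  # 11 labels: FP drift pushes an extra '1.0' onto the end
print(dec_labels)    # exactly 10 labels, '0.0' through '0.9'
```

Note that the float version doesn't just format badly--it also takes one step too many, so even the precision specifier can't fully rescue it.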