Isn't this expected behaviour (at least for programmers), given IEEE 754?
For example, in Racket this also gives 2.0, but if you use exact numbers (#e), it gives the exact and correct answer, 1.
Same for Java (and any other language): if you use Float/Double, you are losing precision. Use BigDecimal (or something similar in other languages) if you want exact calculations.
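A quick sketch of the same contrast in Python (standing in here for Racket's #e exact numbers and Java's BigDecimal, so only an analogy): the hardware double loses the low-order digit, while the exact types keep it.

from decimal import Decimal
from fractions import Fraction

# Binary doubles can't represent 9999999999999999.0, so it rounds to 1e16
# and the subtraction comes out as 2.0.
print(9999999999999999.0 - 9999999999999998.0)                         # 2.0

# Exact representations keep every digit and give the schoolbook answer.
print(Decimal('9999999999999999.0') - Decimal('9999999999999998.0'))   # Decimal('1.0')
print(Fraction(9999999999999999) - Fraction(9999999999999998))         # 1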
If there's a problem here, it's that large integers are being treated as floating-point numbers with insufficient precision to represent them properly. With the exception of Perl 6 (?!), programming languages all seem to agree that putting a period in a number gives them a license to return answers that don't agree with normal arithmetic. If you think about it, that's an odd thing to be unanimous about.
Also, BigDecimal doesn't give "exact calculations" unless you stick to numbers with finite decimal expansions. BigDecimal can't do 1/3 without rounding.
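Python's decimal module behaves the same way as BigDecimal on this point, assuming any finite context precision; only a true rational type carries 1/3 exactly. A small sketch:

from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 10          # any finite precision forces rounding
third = Decimal(1) / Decimal(3)
print(third)                    # 0.3333333333
print(third * 3)                # 0.9999999999, not 1

print(Fraction(1, 3) * 3)       # 1 -- a rational type keeps 1/3 exact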
UPD: An interesting question, though: if language X gives the correct answer for that example (without using special classes or functions), does that mean language X is not following IEEE 754 and is doing something else under the hood, and hence will be much slower?
Take Java, for example: no one uses BigInteger/BigDecimal by default because they are waaay slower.
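A rough, unscientific way to feel that cost difference from Python (a stand-in for the Java point; exact numbers vary by machine, and interpreter overhead dominates here anyway):

import timeit
from decimal import Decimal

a, b = 9999999999999999.0, 9999999999999998.0
da, db = Decimal('9999999999999999.0'), Decimal('9999999999999998.0')

# One million subtractions each; Decimal is C-accelerated in CPython but is
# still noticeably slower than a hardware double subtraction.
print(timeit.timeit(lambda: a - b))
print(timeit.timeit(lambda: da - db))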
Some dynamically typed languages use hybrid approaches - e.g. Ruby has Fixnum and Bignum for integers - but you're correct that additional precision isn't free.
I don't know how much slower, but in theory such hybrid approaches can allow the common case to be just as fast or fast enough (tm) while still returning the correct results for the uncommon case (depending on how much the runtime or compiler can prove about the range of your numbers to enable the fast path).
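For a concrete illustration of the "promote instead of overflow" idea in Python (CPython 2 promoted int to long automatically, much like Ruby's old Fixnum/Bignum split; Python 3 integers are simply arbitrary precision throughout):

# Integer arithmetic never silently drops digits:
print(9999999999999999 - 9999999999999998)       # 1
print(2**64 + 1)                                  # 18446744073709551617

# Floats remain fixed-width IEEE 754 doubles, so the digit is lost:
print(9999999999999999.0 - 9999999999999998.0)   # 2.0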
Kind of confused about what the point is. Is this a complaint that most languages use IEEE 754 for non-integer numbers and he thinks they shouldn't, or a veiled dig at how many programmers don't know this, or...?
The color coding of results suggests the author thinks that 2 is wrong and 1 is right, but he's going out of his way to specify floating point numbers, and when subtracting those two floating point numbers the correct answer is 2 and NOT 1.
E.g., Ruby thinks 9999999999999999.0 - 9999999999999998.0 = 2, but 9999999999999999 - 9999999999999998 = 1. Which is...correct. Right? Unless you don't think IEEE 754 should be the default?
I feel like the author is trying to make a clever point, but if so, I'm not getting it.
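To back up the claim above, a quick Python check (same doubles Ruby would use) shows the first literal isn't representable and rounds to 1e16, so 2 really is the correct float result:

print(9999999999999999.0)                          # 1e+16
print(9999999999999999.0 == 10000000000000000.0)   # True

# 9999999999999998.0 *is* representable, so the float subtraction is 2,
# while exact integer subtraction gives 1:
print(9999999999999999.0 - 9999999999999998.0)     # 2.0
print(9999999999999999 - 9999999999999998)         # 1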
Note that the Common Lisp example could be abbreviated to:
(- 9999999999999999.0l0 9999999999999998.0l0)
However, this will only work (either way) in Lisps where long floats are arbitrary precision, like CLISP. In most Lisps (e.g. SBCL, CCL), long floats are just double floats.
9999999999999999.0 can't be represented as a double. Depending on the rounding mode, it could become 9999999999999998.0, in which case the subtraction gives zero. That happens when the rounding mode is round toward zero or round down. Any other mode rounds it up to 1e+16, so you get an answer of 2.
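You can see the two candidate doubles from Python (math.nextafter needs 3.9+); which one the literal becomes is exactly the rounding-mode question above:

import math

hi = 1e16                          # 10000000000000000.0, representable
lo = math.nextafter(hi, 0.0)       # next double toward zero
print(lo)                          # 9999999999999998.0
print(hi - lo)                     # 2.0 -- doubles above 2**53 are 2 apart

# 9999999999999999 sits exactly halfway between lo and hi; the default
# round-to-nearest-even mode breaks the tie upward to 1e16 (answer 2),
# while round-toward-zero or round-down would pick lo (answer 0).
print(float(9999999999999999))     # 1e+16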
In [1]: from decimal import Decimal as D
In [2]: D('9999999999999999.0') - D('9999999999999998.0')
Out[2]: Decimal('1.0')
From the docs: Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.”