
9999999999999999.0 – 9999999999999998.0 - lelf
http://geocar.sdf1.org/numbers.html
======
twtw
I don't understand all the crap that IEEE 754 gets. I appreciate that it may
be surprising that 0.1 + 0.2 != 0.3 at first, or that many people are not
educated about floating point, but I don't understand the people who
"understand" floating point and continue to criticize it for the 0.1 + 0.2
"problem."

The fact is that IEEE 754 is an exceptionally good way to approximate the
reals in computers with a minimum number of problems or surprises. People who
don't appreciate this should try to do math in fixed point to gain some
insight into how little you have to think about doing math in floating point.

This isn't to say there aren't issues with IEEE 754 - of course there are.
Catastrophic cancellation and friends are not fun, and there are some
criticisms to be made with how FP exceptions are usually exposed, but these
are pretty small problems considering the problem is to fit the reals into
64/32/16 bits and have fast math.

~~~
svat
> considering the problem is to fit the reals into 64/32/16 bits and have fast
> math

Floating-point numbers (and IEEE-754 in particular) are a good solution to
this problem, but is it the right problem?

I think the "minimum of surprises" part isn't true. Many programmers develop
incorrect mental models when starting to program, and get no feedback to
correct them until much later (when they get surprised).

It is true that for the problem you mentioned, IEEE 754 is a good tradeoff
(though Gustafson has some interesting ideas with “unums”:
[https://web.stanford.edu/class/ee380/Abstracts/170201-slides...](https://web.stanford.edu/class/ee380/Abstracts/170201-slides.pdf)
/ [http://johngustafson.net/unums.html](http://johngustafson.net/unums.html) /
[https://en.wikipedia.org/w/index.php?title=Unum_(number_form...](https://en.wikipedia.org/w/index.php?title=Unum_\(number_format\)&oldid=873507492)
). But many programmers do not realize how they are approximating, and the
"fixed number of bits" may not be a strict requirement in many cases. (For
example, languages that have arbitrary precision integers by default don't
seem to suffer for it overall, relative to those that have 32-bit or 64-bit
integers.)

Even without moving away from the IEEE-754 standard, there are ways languages
could be designed to minimize surprises. A couple of crazy ideas: Imagine if
typing the literal 0.1 into a program gave an error or warning saying it
cannot be represented exactly and has been approximated to
0.100000000000000005551, and one had to type "~0.1" or "nearest(0.1)" or add
something at the top of the program to suppress such errors/warnings. At a
very slight cost, one gives more feedback to the user to either fix their
mental model or switch to a more appropriate type for their application.
Similarly if the default print/to-string on a float showed ranges (e.g.
printing the single-precision float corresponding to 0.1, namely
0.100000001490116119385, would show "between 0.09999999776482582 and
0.10000000521540642" or whatever) and one had to do an extra step or add
something to the top of the program to get the shortest approximation ("0.1").

~~~
tasty_freeze
When I went to university in 1982, one of the lower level courses was called
"Numerical Methods". It went over all of the issues related to precision,
stability, as well as a host of common numerical integration and approximation
methods.

I'm just a sample size of one, but isn't this kind of class a requirement for
CS majors?

~~~
sampo
> isn't this kind of class a requirement for CS majors?

I think most CS departments dropped numerical analysis from their requirements
by the end of 1980s. Nowadays you are more likely to find such a course in
some dusty corner of math or engineering departments.

~~~
vlovich123
My university moved it from a 100 series course to a 200 series course but
it's still being taught to ECE undergrads.

The problem is more that we don't have the tools to track and understand how
the errors evolve as we do the math (e.g. how would you even begin to represent
catastrophic cancellation at compile time?). Doing the numerical error analysis
on the abstract math itself is hard once the math gets complex, let alone
trying to figure it out after you've optimized the code for performance and
tweaked the algorithms for real-world data/discrete space.

Now perhaps it could be possible to do it at runtime in some way, but I suspect
the performance of that is prohibitive to the point where arbitrary-precision
math or decimal numbers are going to be a better solution.

------
svat
A useful website for these that I ran across recently:
[https://float.exposed/](https://float.exposed/)

For example, entering 9999999999999999.0 into "double" gives
[https://float.exposed/0x4341c37937e08000](https://float.exposed/0x4341c37937e08000)
and entering 9999999999999998.0 gives
[https://float.exposed/0x4341c37937e07fff](https://float.exposed/0x4341c37937e07fff)

My wishlist for such a page would contain two additional features:

1. Allow entering expressions like "a OP b == c", so that one can enter "0.1
+ 0.2 == 0.3" or "9999999999999999.0 - 9999999999999998.0 == 1.0" and see the
terms on the left-hand side and right-hand side.

2. Show for each float the explicit actual range of real numbers that will be
represented by that float. For example, show that every real number in the
range [9999999999999999, 10000000000000001] is represented by
10000000000000000, and that every real number in the range (9999999999999997,
9999999999999999) is represented by 9999999999999998.
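
Item 2 can be prototyped with Python's math.nextafter (3.9+) and exact Fraction arithmetic - a rough sketch, where the interval's endpoints are the midpoints to the neighbouring doubles:

        import math
        from fractions import Fraction

        def rounding_interval(f: float):
            """Endpoints of the set of reals that round (to nearest) to the double f."""
            prev = math.nextafter(f, -math.inf)
            nxt = math.nextafter(f, math.inf)
            # Compute the midpoints exactly; float arithmetic could round them away.
            return (Fraction(f) + Fraction(prev)) / 2, (Fraction(f) + Fraction(nxt)) / 2

        lo, hi = rounding_interval(1e16)
        print(lo, hi)   # 9999999999999999 10000000000000001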

The author of this one has a blog post about it:
[https://ciechanow.ski/exposing-floating-
point/](https://ciechanow.ski/exposing-floating-point/) and I also like a
shorter (unrelated) page that nicely explains the tradeoffs involved in
floating-point representations and the IEEE 754 standard, by usefully starting
with an 8-bit format:
[http://www.toves.org/books/float/](http://www.toves.org/books/float/)

~~~
DougBTX
> 9999999999999999.0 into "double" gives
> [https://float.exposed/0x4341c37937e08000](https://float.exposed/0x4341c37937e08000)

Nice that it reformats the input to "10000000000000000.0" - it gets the point
across that a 64-bit double just doesn't have enough bits to exactly represent
9999999999999999.0, even though it does happen to be able to represent
9999999999999998.0.

~~~
loeg
An easy rule of thumb is that each 3 decimal digits takes 10 bits to represent.
9999999999999999 is 16 (= 15 + 1) decimal digits: the first 15 digits need 50
bits, and since 3 bits can only represent 0-7, you need more than 3 bits for
that final digit. So, 50 + 4 = 54 bits.

IEEE 754 64-bit floats have 53 significant bits ("mantissa").
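
Quick check in Python:

        >>> (9999999999999999).bit_length()   # 54 bits needed, as estimated
        54
        >>> import sys; sys.float_info.mant_dig   # a double only has 53
        53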

------
al2o3cr
The arithmetic is correct - the problem is that "9999999999999999.0" _isn't_
representable exactly.

9999999999999998.0 in IEEE754 is 0x4341C37937E07FFF

"9999999999999999.0" in IEEE754 is 0x4341C37937E08000 - the significand is
exactly one higher.

With an exponent of 53, the ULP is _2_ - so parsing "9999999999999999.0"
returns 1.0E16 because it's the next representable number.
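
Easy to verify by reinterpreting the bits, e.g. in Python:

        import struct

        def bits(x: float) -> str:
            """The raw IEEE 754 bit pattern of a double, as hex."""
            return hex(struct.unpack('<Q', struct.pack('<d', x))[0])

        print(bits(9999999999999998.0))   # 0x4341c37937e07fff
        print(bits(9999999999999999.0))   # 0x4341c37937e08000, same as bits(1e16)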

    
    
        Using one of these workarounds requires a certain prescience of the
        data domain, so they were not generally considered for the table above.
    

Doing arithmetic reliably with fixed-precision arithmetic always requires
understanding of the data domain. If you need arbitrary precision, you'll need
to pay the overhead costs of arbitrary precision: either by opting in via the
right library, or by default in languages like Perl6 and Wolfram.

------
alanfranz
What is the "right answer"? Is the article claiming that such languages don't
respect IEEE-754, or that IEEE-754 is shit?

If you want arbitrary precision, use an arbitrary precision datatype. If you
use fixed precision, you'll need to know how those floats work.

Pointless article, imho.

~~~
mabbo
The point is to illustrate a simple fact that most of us know - but maybe some
don't.

[https://m.xkcd.com/1053/](https://m.xkcd.com/1053/)

~~~
twtw
It doesn't even illustrate that particularly well. As is, the page just seems
to be pointing at floating point and yelling "wrong", with no information on
what's actually happening.

By all means embrace the surprise and educate today's 10,000, but why not
actually explain why these are reasonable answers and the mechanics behind the
scenes?

------
misterdoubt
_Using one of these workarounds requires a certain prescience of the data
domain_

I'm a little concerned if merely knowing the existence of floating point
arithmetic constitutes "prescience."

------
seanalltogether
Are there any mainstream languages that consider a decimal number to be a
primitive type? I feel like floating point numbers are far less meaningful in
everyday programs. Even 2d graphics would be easier with decimal numbers.
Unless you're using numbers that scale from very small to very large, like 3d
games or scientific calculations, you don't actually want to use floating
point.

~~~
twtw
Julia has built in rationals (as do a few other languages).

I'm not aware of any language (other than Wolfram) that defaults to storing
something like 0.1 as 1/10 - i.e. uses the decimal constant notation for
rationals, rather than having some secondary syntax or library.
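
Python's fractions module is typical of the opt-in library route - the decimal string is parsed exactly, so 0.1 really is stored as 1/10:

        from fractions import Fraction

        print(Fraction('0.1') + Fraction('0.2') == Fraction('0.3'))   # True
        print(Fraction('9999999999999999.0') - Fraction('9999999999999998.0'))   # 1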

~~~
kccqzy
Even in Wolfram, 0.1 is not the same as 1/10.

    
    
        In[1]:= Precision[0.1]
    
        Out[1]= MachinePrecision
    
        In[2]:= Precision[1/10]
    
        Out[2]= \[Infinity]

~~~
twtw
Ah, thanks for the correction.

I don't currently have a license, so out of curiosity is 0.3 == 3/10 in
wolfram?

~~~
kccqzy
Yes that's true.

According to the documentation of Equal†,

> Approximate numbers with machine precision or higher are considered equal if
> they differ in at most their last seven binary digits (roughly their last
> two decimal digits).

Which is why in Mathematica, 0.1+0.2==0.3 is also True.

If you need a kind of equality comparison that returns False for 0.3 and 3/10,
use SameQ. Funnily, SameQ[0.1+0.2,0.3] is also True, because SameQ allows two
machine precision numbers to differ in their last binary digit.

†:
[https://reference.wolfram.com/language/ref/Equal.html](https://reference.wolfram.com/language/ref/Equal.html)

------
garethrees
The linked post is a bit poorly expressed, but I think there is a good point
there: fixed-size binary floating-point numbers are a compromise, and they are
a poor compromise for some applications, and difficult to use reliably without
knowing about numerical analysis. (For example, suppose you have an array of
floating-point numbers and you want to add them up, getting the closest
representable approximation to the true sum. This is a very simple problem and
ought to have a very simple solution, but with floating-point numbers it does
not [1].)
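
Python's math.fsum is one ready-made correctly-rounded sum, which makes the failure of naive left-to-right addition easy to demonstrate:

        import math

        xs = [1e16, 1.0, -1e16]
        print(sum(xs))        # 0.0: the intermediate 1e16 + 1.0 rounds the 1.0 away
        print(math.fsum(xs))  # 1.0: the correctly rounded sum of the exact values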

Perhaps it is time for the developers of new programming languages to consider
using a different approach to representing approximations to real numbers, for
example something like the General Decimal Arithmetic Specification [2], and
to relegate fixed-size binary floating-point numbers to a library for use by
experts.

There is an analogy with integers: historically, languages like C provided
fixed-size binary integers with wrap-around or undefined behaviour on
overflow, but with experience we recognise that these are a poor compromise,
responsible for many bugs, and suitable only for careful use by experts.
Modern languages with arbitrary-precision integers are much easier to write
reliable programs in.

[1]
[https://en.wikipedia.org/wiki/Kahan_summation_algorithm](https://en.wikipedia.org/wiki/Kahan_summation_algorithm)
[2]
[http://speleotrove.com/decimal/decarith.html](http://speleotrove.com/decimal/decarith.html)

~~~
MauranKilom
Do note that UB on signed integer overflow is (at least nowadays) more of a
compiler wish for optimization purposes than a technical necessity (your CPU
will indeed just wrap around if you don't live in the 80s anymore, but a C++
compiler may assume that overflow won't happen for a signed loop index).

------
f2f
Also worth checking:
[http://0.30000000000000004.com/](http://0.30000000000000004.com/)

most popular previous discussion:
[https://news.ycombinator.com/item?id=10558871](https://news.ycombinator.com/item?id=10558871)

------
chaitanya
There's an easier way to specify long floats in Common Lisp: use the exponent
marker "L" e.g. 9999999999999999.0L0. No need to bind or set reader variables.

That said, even in Common Lisp I think it's only CLISP (among the free
implementations) that gives the correct answer for long floats.

CLISP:

    
    
        [1]> (- 9999999999999999.0L0 9999999999999998.0L0)
        1.0L0
    

SBCL, CMUCL and Clozure CL:

    
    
        * (- 9999999999999999.0L0 9999999999999998.0L0)
        2.0d0
    

The standard only mandates a minimum precision of 50 bits for both double and
long floats, so there's no guarantee that using long floats will give the
correct answer, as we can see.

[http://www.lispworks.com/documentation/HyperSpec/Body/t_shor...](http://www.lispworks.com/documentation/HyperSpec/Body/t_short_.htm)

------
mark-r
Is floating point math broken?

[https://stackoverflow.com/questions/588004/is-floating-
point...](https://stackoverflow.com/questions/588004/is-floating-point-math-
broken)

No, it's just that a lot of people don't understand its limitations.

------
mbostock
It’s nice that JavaScript has arbitrary-precision integers now.
9999999999999999n - 9999999999999998n === 1n

------
dsalaj
Google calculator gives answer 0 whereas the duckduckgo calculator answers with
2. xD

~~~
azhenley
Bing gives 1 though!

~~~
sampo
So duckduckgo uses the normal 64-bit floating point, and the clever people at
Bing automatically switch to bignums when needed. But I have no idea what
Google does to get that 0.

~~~
azhenley
I think Google is truncating the numbers. You get 0 even if you do:

9999999999999999 - 9999999999999990

~~~
sampo

        9999999999999999 - 9999999999999971 ==  0
        9999999999999999 - 9999999999999970 == 30
        9999999999999999 - 9999999999999969 == 32
        9999999999999999 - 9999999999999966 == 34

------
nly
This is particularly sucky to solve in C and C++ because you don't get
arbitrary precision literals.

    
    
        #include <boost/multiprecision/cpp_dec_float.hpp>
        #include <boost/lexical_cast.hpp>
        #include <iostream>
    
        using fl50 = boost::multiprecision::cpp_dec_float_50;
    
        int main() {
            auto a = boost::lexical_cast<fl50>("9999999999999999.7");
            auto b = boost::lexical_cast<fl50>("9999999999999998.5");
            std::cout << (a - b) << "\n";
        }
    

works

    
    
        int main() {
            fl50 a = 9999999999999999.7;
            fl50 b = 9999999999999998.5;
            std::cout << (a - b) << "\n";
        }
    

doesn't, even if you change fl50 out for a quad precision binary float type.

~~~
svat
> Even user-defined literals in C++11 and later don't let you express custom
> floating point expressions

Note that in your code sample you're not actually using user-defined literals
([https://en.cppreference.com/w/cpp/language/user_literal](https://en.cppreference.com/w/cpp/language/user_literal)).
This works (based on your earlier code sample and adding user-defined
literals):

    
    
        #include <boost/multiprecision/cpp_dec_float.hpp>
        #include <boost/lexical_cast.hpp>
        #include <iostream>
        using fl50 = boost::multiprecision::cpp_dec_float_50;
        fl50 operator"" _w(const char* s) { return boost::lexical_cast<fl50>(s); }
        int main() {
            fl50 a = 9999999999999999.7_w;
            fl50 b = 9999999999999998.5_w;
            std::cout << (a - b) << "\n";
        }

~~~
nly
Thanks, it's nice to be wrong! For some reason I had it in my head that you
couldn't get the token as a char const* for floating point expressions...

------
rg3
Note in C you can get the correct result if you use long doubles, which
normally go to 80 bits[1]:

printf("%Lf\n", 9999999999999999.0L - 9999999999999998.0L);

On my x86_64 computer it breaks when you add enough digits; at this point it
started outputting 0.0 as the difference:

printf("%Lf\n", 99999999999999999999.0L - 99999999999999999998.0L);

With 63 bits for the fraction part you more or less get around 19 decimal
digits of precision, and the expression above uses 20 significant digits.

[1]
[https://en.wikipedia.org/wiki/Extended_precision#x86_extende...](https://en.wikipedia.org/wiki/Extended_precision#x86_extended_precision_format)

------
thanatos_dem
Interestingly, SQLite gets it wrong, returning 2.0, but MySQL, MariaDB,
Postgres, and Cockroach all get it right at 1.0.

I guess this comes down to most of them having implementations of arbitrary
precision decimals.

~~~
zeroimpl
In PostgreSQL, if you specify a decimal literal, it is assumed to be type
NUMERIC (arbitrary precision) by default, as opposed to FLOAT or DOUBLE
PRECISION.

If you stored your values in table rows as DOUBLE PRECISION, you would of
course get the wrong answer.

------
tzury
With python, I get 2 even when using Decimals.

    
    
        Python 2.7.3 (default, Oct 26 2016, 21:01:49)
        [GCC 4.6.3] on linux2
        Type "help", "copyright", "credits" or "license" for more 
        information.
        >>> from decimal import *
        >>> getcontext().prec
        28
        >>> a=Decimal(9999999999999999.0)
        >>> b=Decimal(9999999999999998.0)
        >>> a-b
        Decimal('2')
    

That is unexpected.

~~~
minitech
The issue is that 9999999999999999.0 == 10000000000000000.0. You need to pass
a string: Decimal('9999999999999999.0')
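
In other words (any Python 3):

        from decimal import Decimal

        # The rounding happened before Decimal ever saw the number:
        print(Decimal(9999999999999999.0))    # 10000000000000000
        # A string preserves the intended decimal value:
        print(Decimal('9999999999999999.0') - Decimal('9999999999999998.0'))   # 1.0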

------
rscho
There is interesting ongoing research on representing exact reals:
[https://youtu.be/pMDoNfKXYZg](https://youtu.be/pMDoNfKXYZg)

~~~
Veedrac
Quick summary of the talk:

A specialist number representation is made for exact representation of values
in geometric calculations (think CAD). Numbers are represented as sums of
rational multiples of cos(iπ/2n).

Exact summation, multiplication and division (not shown) of these quantities
are possible, and certain edge cases (e.g. sqrt) have special-case handling.

The system was integrated into and tested on an existing codebase.

The speaker was also one of the authors of Herbie, if other people remember
that.

------
gpm
2 with 64 bit floats, 0 with 32 bit floats.
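
The 32-bit case can be simulated by round-tripping through struct, e.g. in Python:

        import struct

        def f32(x: float) -> float:
            """Round a double through a 32-bit float."""
            return struct.unpack('<f', struct.pack('<f', x))[0]

        # Both literals land on the same 32-bit float, hence the 0:
        print(f32(9999999999999999.0) - f32(9999999999999998.0))   # 0.0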

~~~
sp332
I could understand 0, but how does it get 2?

~~~
codeflo
FP numbers are (roughly) stored in the form m×2^e (m = mantissa, e =
exponent). When numbers can't be represented exactly, m is rounded. My guess
is that these numbers end up being encoded as 4999999999999999×2 and
4999999999999999.5×2, where the latter is rounded up to 5000000000000000×2.
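
Python's float.hex makes the m×2^e form visible (significand shown in hex), confirming the rounding:

        >>> (9999999999999998.0).hex()
        '0x1.1c37937e07fffp+53'
        >>> (9999999999999999.0).hex()   # the halfway significand was rounded up...
        '0x1.1c37937e08000p+53'
        >>> 9999999999999999.0 == 1e16   # ...landing on the next representable value
        True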

------
preinheimer
$ php -v

PHP 7.2.10 (cli) (built: Oct 9 2018 14:56:43) ( NTS )

$ php -r "echo 9999999999999999.0 - 9999999999999998.0;"

2

$ php -r "echo bcsub('9999999999999999.0', '9999999999999998.0', 1);"

1.0

bcmath -
[http://php.net/manual/en/function.bcsub.php](http://php.net/manual/en/function.bcsub.php)

~~~
mikey_p
Result for all versions of PHP:
[https://3v4l.org/JYlrp](https://3v4l.org/JYlrp)

~~~
mikey_p
And with bcmath: [https://3v4l.org/AmOQt](https://3v4l.org/AmOQt)

------
Steve44
The accounting software we use has a built in calculator which has a similar
problem.

5.55 * 1.5 = 8.3249999999999....

26.93 * 3 = 80.7899999999999....

I raised it with the supplier some time ago; they said it's just the
calculator app and the main program isn't affected. Quite shocking that they
are happy to leave it like this.

------
combatentropy
Pros and cons of the different ways computers can work with fractions:
[https://softwareengineering.stackexchange.com/a/167166/10934...](https://softwareengineering.stackexchange.com/a/167166/109343)

------
spencerwgreene
The author says, "That Go uses arbitrary-precision for constant expressions
seems dangerous to me."

Why?

My thoughts: 1) more inefficient programs because encountering an arbitrary-
precision expression requires arbitrarily large memory and computation, 2)
more complicated language implementation.

~~~
banthar
Constant expressions are evaluated at compile time, so any performance penalty
is paid during compilation. This probably makes the compiler simpler - no need
to implement different arithmetic for different types and no need to guess the
types.

The dangerous bit is that just extracting a variable from a constant
expression might change the result slightly. That should not be a problem,
unless you are depending on exact values.

------
lacey
While it is not the default literal type in Haskell, you can use coercion and
the Scientific type to compute an (almost) arbitrary precision result. For
example this prints 1.0 in the repl:

import Data.Scientific

(9999999999999999.0 :: Scientific) - (9999999999999998.0 :: Scientific)

------
gravypod
Java also has BigDecimal for this kind of work.

------
guggle
How do Perl, Wolfram and Soup get the "right" (ahaha...) answer? (I'm not
familiar with these languages.)

Of course for others it should be "fixable" where needed:

    
    
      ~ python3
      Python 3.6.7 (default, Oct 22 2018, 11:32:17) 
      [GCC 8.2.0] on linux
      Type "help", "copyright", "credits" or "license" for more information.
      >>> from decimal import Decimal
      >>> Decimal('9999999999999999.0') - Decimal('9999999999999998.0')
      Decimal('1.0')

------
Tempest1981
Do any of the languages mentioned give a compiler warning? To help educate?

~~~
PeterisP
IMHO it's infeasible, because the exact same situation as with
9999999999999999.0 (a literal that's impossible to represent accurately as a
double and will get rounded to something else) also applies to very common
cases such as 0.1 (which can't have an exact binary representation at all).
Adding a compiler warning for that would mean the warning triggers for pretty
much every floating point literal.

~~~
acjohnson55
I don't think that's quite true. For 0.1, the algorithm for printing back the
number will still result in 0.1, but the same is not true for
9999999999999999.0, which comes back as 1e16. So compilers could easily warn
for this specific situation, where what I'd call the round-trip value of the
literal is broken.
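
That check is easy to express; a sketch of the warning's condition in Python:

        from decimal import Decimal

        def literal_round_trips(literal: str) -> bool:
            """Does the parsed double print back to the literal's decimal value?"""
            return Decimal(repr(float(literal))) == Decimal(literal)

        print(literal_round_trips('0.1'))                  # True: prints back as 0.1
        print(literal_round_trips('9999999999999999.0'))   # False: prints back as 1e+16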

But I don't think such a warning would be all that helpful. How often do we
use literals with 16 significant digits, expecting exact representation?

The bigger gotcha here is catastrophic cancellation. This is the issue of an
insignificant rounding error becoming much more significant due to the
subtraction of very nearly equal numbers. You can't generally detect this at
compile time if you don't know all your numbers in advance (e.g. you're not
working with only literals).
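
The submitted example is itself an instance: a relative error of about 1e-16 in one operand becomes a 100% error in the result:

        a = 9999999999999999.0   # stored as 1e16: relative error ~1e-16, harmless so far
        b = 9999999999999998.0   # exactly representable
        print(a - b)             # 2.0: after cancellation, the representation error dominates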

~~~
jmiserez
You can do abstract interpretation and calculate rounding precision at each
line. The problem is that as soon as you have loops, you'll pretty much get
that warning everywhere. Being sure that there is no cancellation is even
harder, but possible in some cases. I'm sure there are better approaches;
there are tons of research papers in that area.

------
Aardwolf
> Several of the results surprised me. Did they surprise you?

Well, the perl6 result surprised me, since that means it's using something
more precise than double precision floating point :)

~~~
_kst_
It surprised me too, since it's not what I got.

    
    
        $ perl6 --version
        This is Rakudo version 2018.03 built on MoarVM version 2018.03
        implementing Perl 6.c.
        $ perl6 -e 'print 9999999999999999.0-9999999999999998.0;print "\n";'
        2
        $
    

(Incidentally, I would have used "say" rather than "print" with an explicit
newline.)

~~~
lizmat
Indeed, there was a bug with determining when to switch to floats, which was
fixed by Zoffix in August:

    
    
        https://github.com/rakudo/rakudo/commit/fec1bd74f97e257d4c88673cd62fdcae39f587a3

------
chris_mc
Well, one answer is to use IEEE 1788 interval arithmetic, which will at least
give you IEEE 754 high and low bounds for a calculation, rather than one
answer that's clearly wrong. Otherwise, some inaccuracy is the trade-off for
fast floating point calculations.
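
A toy version of the idea in Python, using math.nextafter (3.9+) for outward rounding - a rough sketch, not the actual IEEE 1788 machinery:

        import math

        def isub(x, y):
            """[x0, x1] - [y0, y1], widening each endpoint outward by one ULP."""
            return (math.nextafter(x[0] - y[1], -math.inf),
                    math.nextafter(x[1] - y[0], math.inf))

        # Enclose the unrepresentable literal between neighbouring doubles:
        a = (math.nextafter(1e16, 0.0), 1e16)           # contains 9999999999999999
        b = (9999999999999998.0, 9999999999999998.0)    # exact
        print(isub(a, b))   # (-5e-324, 2.0000000000000004): bounds containing the true 1.0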

------
crankylinuxuser
So.. Can someone better versed in the ways of system level programming tell me
why we still use IEEE 754 exponential notation?

I've seen article after article about how "horrible" it is. So, are there
default libs to use Binary Coded Decimal (BCD) or something like that?

~~~
TazeTSchnitzel
We use it because it's fast, accurate and very useful for many applications.
It's not horrible.

~~~
chowells
In fact, as discussed in another thread, it's the optimal representation for
many purposes. The only problems are that some languages over-privilege them
to the point where it's difficult to use alternatives, and some programmers
don't understand them.

------
ddtaylor
Interestingly `bc` as well as `bash expr` on the command line give the correct
result.

~~~
LeoPanthera
"bc - An arbitrary precision calculator language"

bc is doing its job!

------
elpakal
Welcome to Apple Swift version 4.2.1 (swiftlang-1000.11.42
clang-1000.11.45.1). Type :help for assistance.

    
    
      1> let foo = 9999999999999999.0 - 9999999999999998.0
      foo: Double = 2

------
NextHendrix
Actually Wolfram|Alpha does give 1, but Mathematica 11.3 gives 2.

~~~
rexpress
Mathematica interprets the real numbers 9999999999999999.0 and
9999999999999998.0 as having machine precision. To work in arbitrary
precision, you need a backtick after the number, followed by the number of
significant digits.

In this case,

9999999999999999.0`17-9999999999999998.0`17 does indeed return 1.

[https://reference.wolfram.com/language/ref/Precision.html](https://reference.wolfram.com/language/ref/Precision.html)

------
rurban
In perl5 it's easier:

    
    
        perl -Mbignum -e'print 9999999999999999.0 - 9999999999999998.0'
        1
    

The BigFloat solution on the website is suboptimal.

~~~
lizmat
In Perl 6 even shorter than that:

    
    
        perl6 -e'print 9999999999999999.0 - 9999999999999998.0'
        1

------
buchanae
[https://play.golang.org/p/_EkSHUIrg1y](https://play.golang.org/p/_EkSHUIrg1y)

~~~
buchanae
But,
[https://play.golang.org/p/naE55o3_xFP](https://play.golang.org/p/naE55o3_xFP)

I guess the first link is converting the float constants to ints at compile-
time?

(edit: oh, it's actually mentioned in the article. I should read more
carefully)

~~~
dan-robertson
In gcc, there is software emulation of hardware floating point arithmetic so
that compile-time constants may be evaluated for any target architecture (even
if the compiling architecture does not support that format). It seems Go
approximates this as "just evaluate with high precision then convert to
float", which is probably mostly fine, but having arithmetic differ between
compile time and run time seems likely to be not fun.

------
Dylan16807
Not that this is specifically a "floating point" problem.

Plenty of languages are going to get upset if you add 2 billion to 2 billion.

------
carterschonwald
It's good to educate people about default representations of numerical
literals and their corner cases.

Whenever I see these examples I do get annoyed at the Haskell one, because we
are never told what type it gets defaulted to; the defaulting only happens
silently in the ghci repl, but will trigger a warning if it's in a source file
that's being compiled.

------
x0
        sqlite> SELECT 9999999999999998.0 - 9999999999999999.0;
        -2.0

        MariaDB [(none)]> SELECT 9999999999999998.0 - 9999999999999999.0;
        -1.0

------
consp
I don't get it. Why is the author complaining? He requests 16 digits of
accuracy, which is not something you should be using floats for in any form.
Just import/link a package which can do arbitrary-precision arithmetic of your
choice and pay the overhead price.

------
issam01
Pascal/Delphi gives the right answer:

writeln(FloatToStr(9999999999999999.0 - 9999999999999998.0));

1

------
jdhzzz
Groovy 2.4.5 gets it right:

        println(9999999999999999.0-9999999999999998.0)
        1.0

------
zelon88
Just tried it in PHP and was given an answer of 2.

------
adrianhel
The default should have been arbitrary precision bignums with easy opt-in for
floats and ints of varying sizes.

But oh well.

------
Sawamara
Yeah, one more article shitting on IEEE-754, yaay.

------
8bitsrule
For some reason, this -very old- situation doesn't seem to trouble enough
quadrillionaire coders.

Perhaps inflation will fix it. Or the first quadrillionaire AI.

------
loeg
Well, duh. One of those numbers isn't accurately representable in IEEE 754
64-bit "double" values, which is likely what all of the failing languages use.

If you want more than 53 significant bits, you need something wider than a
64-bit IEEE double. And anyone who cares about such precision knows this and
will expect to use a higher-precision library (or an incredibly specialized
math language, like Wolfram).

If you think having only 53 significant bits is bad, you should see how many
bits common trigonometry operations lose in libm implementations.

Another fun example: video games (historically) have thrown away even those 53
significant bits in favor of the higher-performing 32-bit floats.

