

Ahah, Apple does have different math - profquail
http://www.nntp.perl.org/group/perl.perl5.porters/2009/03/msg144965.html

======
ajross
This isn't a new problem. The 80-bit internal 8087 FPU precision has always
been a mismatch for the 64-bit IEEE double representation, even before the
presence of SSE registers (which don't have the 80 bit mode) complicated
things. Intel 8087 code has always been able to produce different results for
the same source code, depending on when/whether/which intermediate results get
spilled to memory. The Motorola 68k FPU had the same issue with higher
internal than external precision, IIRC.

This isn't a bug. Both code paths produce results of the highest representable
precision of the hardware in question. It's just that there are multiple
hardware units capable of giving you the answer, and Apple's toolchain picks a
different one than whatever Tom is using elsewhere.

And as has been pointed out in FooBarWidget's comment -- any code that relies
on bit-precise results from floating point computation is almost certainly
concealing precision bugs anyway. That's not the right way to approach
floating point architecture.

~~~
ars
For people unfamiliar with this, "spilled to memory" means rounded to 64 bits
(from 80).

x87 does floating point math with 80 bits (in registers). But the variables
are stored in memory locations with 64 bits. So the results are rounded.

The problem comes from _when_ the rounding happens. And that can vary
depending on whether a register is needed for something else, on
optimizations, on the order of the code, etc.

------
profquail
Tell PG: When I submitted this article, HN stripped the "!" characters out of
the title...but I was then able to edit the story and put them back in there.
I suppose that means there's a small bug in there somewhere...

~~~
notauser
You can also use Unicode in comments, which may or ʎɐɯ ʇou be a bug but did
surprise me.

~~~
sailormoon
Why on earth would the ability to use Unicode in comments be a bug? It seems
like a basic feature of any modern site.

------
FooBarWidget
It has long been known that one cannot rely on floating point for precise
calculations, e.g. monetary values. One should use integers instead, or some
other kind of exact representation. But it does surprise me that even on the
same CPU architecture you cannot expect every machine to produce the same
results.

~~~
profquail
I concentrated in optimization and numerical analysis for my undergrad degree
in Math, and I'm still surprised on a regular basis by how many programmers
out there _don't_ know about the inaccuracies and errors that come out of
floating-point representations.

~~~
brettnak
As did I. I'm a professional programmer now and didn't go through CS school.
(To all sophomores out there: we do exist.) Do they really not teach this
sort of thing in a formal CS program?

~~~
wtallis
Most CS undergraduate programs in the US suck enough these days that numerical
analysis isn't taught at the undergrad level. For example, at my school,
undergraduates get to choose between the automata/grammars/computability class
and a "numerical methods" class that is really watered down. There is a two-
semester undergraduate numerical analysis class, but as I recall it can't
count toward your degree except as a general elective. To get numerical
analysis to count as a CS elective, you have to take the graduate level class.

------
modeless
What Every Computer Scientist Should Know About Floating-Point Arithmetic:
<http://docs.sun.com/source/806-3568/ncg_goldberg.html>

------
tentonova2
This shouldn't surprise anyone and I can't say I understand the breathless
post. For more information, I recommend Write Great Code: Understanding the
Machine -- Chapter 4, Floating-Point Representation.

------
9oliYQjP
Apple doesn't use the x87 floating point unit. They've never needed to because
they can rely on the SSE2 unit since any Intel Mac ever created has one. This
isn't necessarily the case on the PC even though every modern one will have it
as well. I'm pretty sure gcc is set to compile floating-point code using an
SSE2 code path by default on OS X.
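For the curious, the code path is selectable with gcc's `-mfpmath` flag. An illustrative invocation (the filenames are placeholders, and `-m32` requires 32-bit support libraries to be installed):

```shell
# Force the x87 code path (80-bit intermediates; the 32-bit x86 default):
gcc -m32 -mfpmath=387 -o test_x87 test.c

# Force the SSE code path (strict 64-bit doubles; the x86-64 and
# Apple default):
gcc -mfpmath=sse -msse2 -o test_sse test.c
```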

~~~
ssp
One pedantic correction: SSE is the 4-wide single-precision floating point
instruction set. SSE2 added double-precision and integer instructions using
the same registers.

------
jcl
This reminds me of a page on fast and accurate geometric primitives; the code
requires that you flip the Intel FPU into a _reduced precision mode_ to make
the error easier to estimate.

<http://www.cs.cmu.edu/~quake/robust.pc.html>

------
TallGuyShort
Wasn't a very similar phenomenon a huge blow to Intel's PR some years back?

~~~
profquail
Sort of, but that had to do with a bug in the actual hardware of the original
Pentium chip:

<http://en.wikipedia.org/wiki/Pentium_FDIV_bug>

------
wendroid
On Plan 9:

    
    
        Trying compiler constants first...
    
        f  is 0.12345, rounded to 0.1235, expanded to 0.123450003564357760000000000000
        d  is 0.12345, rounded to 0.1235, expanded to 0.123450000000000000000000000000
        ld is 0.12345, rounded to 0.1235, expanded to 0.123450000000000000000000000000
    
        Now trying derived values...
    
        f  is 0.12345, rounded to 0.1235, expanded to 0.123450003564357760000000000000
        d  is 0.12345, rounded to 0.1235, expanded to 0.123450000000000000000000000000
        ld is 0.12345, rounded to 0.1235, expanded to 0.123450000000000000000000000000
    

So don't blame the architecture.

