
Glibc fixes the value of TWO - akavel
http://sourceware.org/ml/glibc-cvs/2013-q1/msg00115.html
======
akavel
Also discussed on reddit
([http://www.reddit.com/r/programming/comments/17zucr/glibc_fi...](http://www.reddit.com/r/programming/comments/17zucr/glibc_fixes_the_value_of_two/));
see there for some interesting comments.

------
no_news_is
To be fair, this was just added in the previous commit...
<http://sourceware.org/ml/glibc-cvs/2013-q1/msg00116.html>

~~~
akavel
No, it was apparently added about half a month earlier
([http://www.sourceware.org/git/gitweb.cgi?p=glibc.git;a=commi...](http://www.sourceware.org/git/gitweb.cgi?p=glibc.git;a=commitdiff;h=99136f82027a5d6276c94a25d8392a7b571a08a3))
- although half a month probably isn't that long either.

------
angersock
Why the hell didn't they put those numeric values into parens? Some of those
(e.g., the MONE) can evaluate improperly once preprocessing is done. >:(

~~~
viraptor
I'm having problems thinking of a scenario where this could happen with unary
"-". Could you give some example where both preprocessing results are valid,
but have different meaning?

~~~
mikeash
It's not _valid_, but I think it's surprising that a simple (albeit somewhat
odd) expression like x-MONE would fail to compile.

~~~
angersock
See my test up on ideone--it seems that it compiles just fine, much to my
surprise.

~~~
viraptor
Not very intuitive, but easy to explain if you know the compiler stages.

Basically what happens is: b-MONE -> lexer -> (b) (-) (MONE) -> preprocessor
-> (b) (-) (-) (1) -> parser

And I think you expected: b-MONE -> preprocessor -> b--1 -> lexer -> (b) (--)
(1) -> fail

PS. yeah, preprocessor+lexer are a bit more coupled, but that's the general
idea

~~~
mikeash
I'm pretty sure the preprocessor goes first. BUT it appears to put a space in
between the two -s, at least in my testing, so the result is the same.

~~~
mrmekon
GCC explains here:

[http://gcc.gnu.org/onlinedocs/cppinternals/Token-Spacing.htm...](http://gcc.gnu.org/onlinedocs/cppinternals/Token-Spacing.html)

I can't find anything in the C spec that says this is strictly required, but
that doesn't mean it's not in there.

~~~
mikeash
Figures, something that looks relatively simple (the C preprocessor) ends up
having a ton of complicated stuff going on underneath.

------
speeder
I figured from reading the reddit post that this is related to IEEE's special
way of writing two.

But why does IEEE have a special way of writing two?

~~~
DannyBee
It's not a special way of writing two. It was a way of getting identical bit
patterns for IEEE floats on all platforms.

Until recently, most compilers would do host arithmetic and then convert it to
target bit pattern. You would get within epsilon equal values, but not always
the _same_ bit pattern.

Now all these compilers use multiprecision float libraries (either MPFR or
custom-written), so the bit patterns will always come out the same. As such,
there is no need for the explicit bit patterns anymore, so they just use
"2.0".

~~~
speeder
And why was it so important to ensure the bit pattern was the same?

~~~
simcop2387
Mostly for testing, and for the ability to reproduce the same issue/error/bug
on many different platforms. Previously, it could be hard to debug if releases
were compiled on one machine with one interpretation of IEEE 754 (most likely
in when and in which direction to round) while the devs all had machines that
did it another way. The values would differ, which could throw off quite a few
things for simulations and other areas with heavy floating-point use.

~~~
fleitz
It can also wreak havoc on render farms. Apparently Intel and AMD floating
point are different enough that you can't mix and match inside the farm.

~~~
Dylan16807
Using the same compiler with the same instruction set/ordering running on each
processor? That sounds astoundingly broken; can you find a reference?

~~~
Someone
That same compiler need not produce the same code for each target. For
example, one CPU might be so slow running instruction X that it is faster to
emulate it, while it could be faster on the other.

There also are floating-point instructions where CPUs do not compute all bits
of the correct result. See [http://www.zqna.net/qna/knvvzn-do-fp-operations-give-exactly...](http://www.zqna.net/qna/knvvzn-do-fp-operations-give-exactly-the-same-result-on-various-x86-cpus.html).

Also, not all CPUs produce the same result. In
<https://blogs.oracle.com/jag/entry/transcendental_meditation>, James Gosling
claims:

 _"As far as I know, the K5 is the only x86 family CPU that did sin/cos
accurately. AMD went back to being bit-for-bit compatible with the old x87
behavior, assumably because too many applications broke"_

~~~
Dylan16807
What I meant is that if your primary concern is identical results, you should
be using the _same_ target, with no runtime switching based on which
instructions one processor or another supports.

Anyway, K5 was almost two decades ago. If you use recent processors, or even
better avoid x87 instructions entirely, are you still going to have different
float results?

~~~
Someone
How do you check that your code never switches paths depending on CPU
architecture? How do you guarantee that for the (third party) libraries you
use, including your C library?

Also, I think modern x86 has an "approximate 1/x" instruction that is there to
be as fast as possible and, because of that, allows for differences in
results. I don't know whether that is true, or whether different
implementations actually exist, though.

~~~
Dylan16807
In the end though that sounds like you can't be entirely sure that chips in
different generations are consistent, so it's not really a manufacturer
difference problem.

