Glibc fixes the value of TWO (sourceware.org)
87 points by akavel on Feb 7, 2013 | 28 comments



Via reddit (http://www.reddit.com/r/programming/comments/17zucr/glibc_fi...); see there for some interesting comments.


To be fair, this was just added in the previous commit... http://sourceware.org/ml/glibc-cvs/2013-q1/msg00116.html


No, it was apparently added about half a month earlier (http://www.sourceware.org/git/gitweb.cgi?p=glibc.git;a=commi...), although half a month isn't really that long either.


Why the hell didn't they put those numeric values in parens? Some of those (e.g., MONE) can evaluate improperly once preprocessing is done. >:(


I'm having problems thinking of a scenario where this could happen with unary "-". Could you give some example where both preprocessing results are valid, but have different meaning?


So, the unary operator seems to be captured correctly ( http://ideone.com/6YkQ3J ).

Several other tests reveal more reliable compiler behavior than I'd been expecting; I need to go back and brush up on my preprocessor-fu.


It's not valid, but I think it's surprising that a simple (albeit somewhat odd) expression like x-MONE would fail to compile.


See my test up on ideone--it seems that it compiles just fine, much to my surprise.


Not very intuitive, but easy to explain if you know the compiler stages.

Basically what happens is: b-MONE -> lexer -> (b) (-) (MONE) -> preprocessor -> (b) (-) (-) (1) -> parser

And I think you expected: b-MONE -> preprocessor -> b--1 -> lexer -> (b) (--) (1) -> fail

PS. yeah, preprocessor+lexer are a bit more coupled, but that's the general idea
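
A minimal sketch of that in practice (MONE here is just a stand-in for the glibc-style macro):

    #include <stdio.h>

    #define MONE -1

    int main(void)
    {
        int b = 5;
        /* Macro expansion works on tokens, so b-MONE becomes b - (-1):
           the two '-' stay separate tokens instead of fusing into '--'. */
        printf("%d\n", b-MONE);   /* prints 6 */
        return 0;
    }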


This is something that ANSI C (C89) fixed by introducing the idea of ‘preprocessing tokens’ used in some of the conceptual translation phases. The two ‘-’ are separate preprocessing tokens because they did not originally appear together, so they turn into separate C tokens ‘-’ ‘-’ rather than one ‘--’. (If preprocessed code needs to be represented as text again, e.g. as output from cc -E, then white space between the two preserves the separation.)

A pre-ANSI preprocessor would generally emit ‘--’, which the compiler would naturally recognize as a decrement operator. I'm not aware of any exceptions; pre-ANSI implementations diverged in various ways, but tended to follow either K&R 1 or Reiser (John, not Hans) CPP.
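
With a C89-conforming toolchain you can see that extra white space by running only the preprocessor (the file name and exact output shown are illustrative; spacing can vary by implementation):

    /* mone.c -- inspect with "cc -E mone.c" */
    #define MONE -1

    int f(int b)
    {
        return b-MONE;   /* a conforming preprocessor emits something like "b- -1",
                            not "b--1", so re-lexing the text gives the same tokens */
    }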


I'm pretty sure the preprocessor goes first. BUT it appears to put a space in between the two -s, at least in my testing, so the result is the same.


GCC explains here:

http://gcc.gnu.org/onlinedocs/cppinternals/Token-Spacing.htm...

I can't find anything in the C spec that says this is strictly required, but that doesn't mean it's not in there.


Figures, something that looks relatively simple (the C preprocessor) ends up having a ton of complicated stuff going on underneath.


That's the reason for the PS... But either way, what I meant is that it doesn't just run a text replace; it has to split the input into tokens first. Otherwise SOME_MONE_TEXT would get turned into SOME_-1_TEXT too. The lexer has to detect where the macro name exists as a whole token on its own and replace only that with the equivalent tokens.
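
A tiny illustration (the identifier names are made up):

    #define MONE -1

    int SOME_MONE_TEXT = 7;            /* untouched: MONE is not a whole token here */

    int main(void)
    {
        return SOME_MONE_TEXT + MONE;  /* only the standalone MONE expands, to -1 */
    }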


Reading the reddit post, I figured this is related to IEEE's special way of writing two.

But why does IEEE have a special way of writing two?


It's not a special way of writing two. It was a way of getting identical bit patterns for IEEE floats on all platforms.

Until recently, most compilers would do host arithmetic and then convert the result to the target bit pattern. You would get values equal to within epsilon, but not always the same bit pattern.

Now all these compilers use multiprecision float libraries (either MPFR or custom-written), so the bit patterns will always come out the same. As such, there is no need for the explicit bit patterns anymore, so they just use "2.0".
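
To make "same bit pattern" concrete, here is a small sketch (not glibc code) that prints the IEEE 754 encoding the compiler actually produced for 2.0:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        double two = 2.0;
        uint64_t bits;
        memcpy(&bits, &two, sizeof bits);                 /* view the raw encoding */
        printf("%016llx\n", (unsigned long long)bits);    /* expect 4000000000000000 */
        return 0;
    }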


Looking at the diff, they're now using the new hexadecimal format for representing IEEE floats, which didn't exist until recently but is a much cleaner way of writing bit-exact float values.
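
For reference, a hexadecimal float literal spells out the mantissa and exponent directly, so the value is bit-exact by construction (a small sketch, not taken from the patch):

    #include <stdio.h>

    int main(void)
    {
        double two  = 0x1p1;    /* 1.0 * 2^1  == 2.0 */
        double half = 0x1p-1;   /* 1.0 * 2^-1 == 0.5 */
        printf("%g %g\n", two, half);   /* prints "2 0.5" */
        return 0;
    }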


And why was it so important to ensure the bit pattern was the same?


Mostly for testing and the ability to reproduce the same issue/error/bug on many different platforms. Before, it could be a little hard to debug if release builds were compiled on a machine with one interpretation of IEEE 754 (most likely in when to round and in what direction) while the devs all had machines that did it another way. The values would then differ, which could throw off quite a few things for simulations and other areas heavy on floating point.


It can also wreak havoc on render farms. Apparently Intel and AMD floating point results are different enough that you can't mix and match inside the farm.


Using the same compiler with the same instruction set/ordering running on each processor? That sounds astoundingly broken; can you find a reference?


That same compiler need not produce the same code for each target. For example, one CPU might be so slow running instruction X that it is faster to emulate it in software, while on another CPU the native instruction is faster.

There are also floating-point instructions where CPUs do not compute all bits of the correct result. See http://www.zqna.net/qna/knvvzn-do-fp-operations-give-exactly....

Also, not all CPUs produce the same result. In https://blogs.oracle.com/jag/entry/transcendental_meditation, James Gosling claims:

"As far as I know, the K5 is the only x86 family CPU that did sin/cos accurately. AMD went back to being bit-for-bit compatibile with the old x87 behavior, assumably because too many applications broke"


What I meant is that if your primary concern is identical results, you should be using the same target, with no runtime switching based on which instructions one processor or another supports.

Anyway, K5 was almost two decades ago. If you use recent processors, or even better avoid x87 instructions entirely, are you still going to have different float results?


How do you check that your code never switches paths depending on CPU architecture? How do you guarantee that for the (third party) libraries you use, including your C library?

Also, I think modern x86 has an "approximate 1/x" instruction that is there to be as fast as possible and, because of that, allows for differences in results. I don't know whether that is true, or whether different implementations actually exist, though.


In the end, though, that sounds like you can't be entirely sure that chips from different generations are consistent either, so it's not really a manufacturer-difference problem.


You are thinking of the RCPSS instruction.
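
RCPSS is exposed through the SSE intrinsics; a small sketch comparing its approximation to an exact divide (the result is only guaranteed to roughly 12 bits, so it can legitimately differ between CPUs):

    #include <stdio.h>
    #include <xmmintrin.h>

    int main(void)
    {
        __m128 x = _mm_set_ss(3.0f);
        float approx = _mm_cvtss_f32(_mm_rcp_ss(x));   /* RCPSS: approximate 1/3 */
        float exact  = 1.0f / 3.0f;                    /* DIVSS: correctly rounded */
        printf("approx=%.9f exact=%.9f\n", approx, exact);
        return 0;
    }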


You can mix and match x87 deterministically if you turn off the 80-bit double internals. For SSE you need some SSE2 and SSE3 modes to do the same.
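
On glibc/x86 the x87 precision control bits can be set through <fpu_control.h>; a hedged sketch (glibc-specific and x86-only, the function name is just illustrative, and it does not shrink the x87's wider exponent range):

    #include <fpu_control.h>

    /* Drop the x87 from 80-bit extended to 53-bit double precision so
       intermediate results round like ordinary doubles in memory. */
    void set_x87_double_precision(void)
    {
        fpu_control_t cw;
        _FPU_GETCW(cw);
        cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
        _FPU_SETCW(cw);
    }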


Those constants are meant to be used in low-level floating point library code. So bit patterns are critical here if you don't want these functions broken in a subtle way.

Here is the patch that introduced the bug: http://www.sourceware.org/git/gitweb.cgi?p=glibc.git;a=commi...



