Xcode 4 ships a buggy compiler (phusion.nl)
52 points by FooBarWidget on Dec 31, 2011 | 65 comments



First of all, all compilers are buggy because all software is buggy. Software has bugs. Deal with it.

Second of all, if you are using alloca to allocate memory on the stack you are asking for trouble. There are so many filthy corner cases to deal with that it is not worth it. Figure out another way to do what you are trying to accomplish. This has nothing to do with C or LLVM. This has to do with how a Von Neumann architecture works. http://stackoverflow.com/questions/1018853/why-is-alloca-not...

Third of all, alloca() is a compiler intrinsic. The compiler may choose to implement it in any way it wants to. http://en.wikipedia.org/wiki/Intrinsic_function

Fourth of all, calling a system function with a size_t argument of zero is undefined behaviour. This is not a bug. Undefined behaviour means that the compiler may do anything. http://en.wikipedia.org/wiki/Undefined_behavior

In general, LLVM does the optimal thing when it hits undefined behaviour because it assumes that the Clang static analyzer will warn the programmer.


> First of all, all compilers are buggy because all software is buggy. Software has bugs. Deal with it.

This is a compiler we're talking about. Having a "deal with it" attitude is not a good thing to have. If there's a kernel bug that causes a crash would you also say "deal with it" or would you ask the creators to fix it?

> Figure out another way to do what you are trying to accomplish.

This is part of the code for the conservative garbage collector which is supposed to scan the stack for pointers. If you know a better way to do that, by all means let's hear it.


> This is a compiler we're talking about. Having a "deal with it" attitude is not a good thing to have. If there's a kernel bug that causes a crash would you also say "deal with it" or would you ask the creators to fix it?

It's not a bug. It's undefined behaviour for a compiler intrinsic.

> This is part of the code for the conservative garbage collector which is supposed to scan the stack for pointers. If you know a better way to do that, by all means let's hear it.

Keep two heaps: a long-life heap and a short-life heap that is essentially a stack (a rough sketch of the idea is below). http://engineering.twitter.com/2011/03/building-faster-ruby-...

Lua uses a struct called lua_State for this. http://www.lua.org/source/5.2/lstate.h.html
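
A very rough sketch of what I mean by that short-life heap (hypothetical code, just to show the shape of the idea):

    /* Hypothetical sketch of a "short life heap that is essentially a
       stack": a bump allocator whose contents are discarded wholesale,
       so the collector never needs to scan it for pointers. */
    #include <stddef.h>
    typedef struct {
        char  *base;      /* start of the region      */
        size_t used;      /* bytes handed out so far  */
        size_t capacity;  /* total size of the region */
    } short_heap;
    static void *short_alloc(short_heap *h, size_t n)
    {
        if (h->used + n > h->capacity)
            return NULL;  /* caller falls back to the long-life heap */
        void *p = h->base + h->used;
        h->used += n;
        return p;
    }
    static void short_reset(short_heap *h)
    {
        h->used = 0;      /* "frees" everything at once */
    }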


> This is part of the code for the conservative garbage collector which is supposed to scan the stack for pointers. If you know a better way to do that, by all means let's hear it.

Why does scanning the stack for pointers need to involve alloca(0)?


alloca(0) is used as a way to get the end of the stack. There's a bunch of fallback #ifdefs in the code: on platforms that don't have alloca, it detects the end of the stack by calling a forced-non-inline function which returns the address of its sole local variable, but that assumes that the compiler supports the 'noinline' keyword. In any case, all of the versions depend on highly platform-specific behavior (a minimal sketch of the noinline fallback is below the links). See:

https://github.com/FooBarWidget/rubyenterpriseedition187-330...

https://github.com/FooBarWidget/rubyenterpriseedition187-330...
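
Roughly, the fallback looks something like this (a minimal sketch with made-up names, writing the address through an out parameter to avoid return-of-local-address warnings; the real code is in the links above):

    /* Sketch of the non-alloca fallback: a forced-non-inline function
       whose sole local variable's address approximates the current end
       of the stack. Deliberately non-portable: it assumes the compiler
       honors 'noinline' and that the platform has a conventional stack. */
    #include <stdio.h>
    __attribute__((noinline))
    static void stack_end_address(void **result)
    {
        void *marker;              /* sole local variable in this frame */
        *result = (void *)&marker;
    }
    int main(void)
    {
        void *end;
        stack_end_address(&end);
        printf("approximate stack end: %p\n", end);
        return 0;
    }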


He isn't claiming it is the only way; he is claiming it is the best way he knows of to do what he needs (or thinks he needs).

I guess he wants the cleanest way to obtain the stack pointer. alloca(1) would work, too, but that runs the risk of producing a stack overflow.


You could of course use alloca(1), which would also give you the address of the end of the stack, plus one byte, but this time using functionality that's inside the specification of alloca().


That's what I did, I replaced alloca(0) with alloca(sizeof(void *)). Unfortunately llvm-gcc still emitted wrong code in other parts of the program which caused Ruby to crash in strange ways. The post is not so much a complaint about alloca(0), it was just an example of where I thought llvm-gcc is broken.


llvm-gcc is not going to be globally broken or not broken. Like all software, it has bugs, and the bugs may or may not affect you. An 'example' that is actually a bug in the program is not very convincing of generic broken-ness. Try again if you want to make that case.

Do you want __builtin_frame_address, by the way? Still target specific, but explicit.
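
For reference, a minimal usage sketch of that builtin (GCC/Clang extension; not from the Ruby code):

    #include <stdio.h>
    int main(void)
    {
        /* __builtin_frame_address(0) yields the frame address of the
           current function; like alloca(0), how that relates to the
           "end of the stack" is target-specific. The argument must be
           a constant. */
        void *frame = __builtin_frame_address(0);
        printf("frame address: %p\n", frame);
        return 0;
    }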


It is not just one example. Take a look at http://llvm.org/bugs/show_bug.cgi?id=9891 which is marked as WONTFIX. Shipping an unmaintained compiler certainly screams broken to me.

__builtin_frame_address looks interesting, I'll take a look.


"Calling alloca(0) should return a pointer to the top of the stack"

Eh? My understanding is that it's undefined behavior and varies per platform and compiler. Relying on it to return a stack pointer seems like a pretty terrible idea even if it should work.


Bingo. Both returning NULL from alloca(0) and assuming that alloca(0) never returns NULL are correct behavior by the compiler. It's simply taking advantage of the nasal demons.

It seems that LLVM and Clang have generally been much more aggressive about taking advantage of undefined behavior, which has ended up breaking code that worked under gcc. For example, with some versions of Clang, if you try to write something like *(int *)0 = 0, it'll completely skip that line when generating code! Completely valid, since dereferencing NULL is undefined behavior, but not what one might expect after using a compiler that's more obedient.
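
A tiny, hypothetical illustration of the kind of thing that gets dropped:

    /* With aggressive optimization, a compiler may delete this store
       entirely, because dereferencing NULL is undefined behavior. */
    void demo(void)
    {
        int *p = NULL;
        *p = 0;    /* may be silently skipped rather than trapping */
    }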

There's a great post on undefined behavior in C, what it means for a compiler, and how LLVM and Clang deal with it here:

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...


But where exactly is it specified that alloca(0) is undefined? None of the man pages I could find say that 'size' may not be 0.

EDIT: no idea why people vote me down for this, it is a legit question. Downvotes are for comments that don't contribute to discussion, not for comments that you disagree with.

Creating a file of 0 bytes is well-defined so why would it be such a strange idea to think that alloca(0) would also be well-defined, especially because none of the man pages mention that it's not allowed?


Whatever is not explicitly defined is undefined.

My man page says, "The alloca() macro allocates size bytes of space in the stack frame of the caller. This temporary space is automatically freed on return. alloca() returns a pointer to the beginning of the allocated space."

What does it mean to allocate zero bytes on the stack frame? What does it mean to return a pointer to the beginning of zero bytes of allocated space?

You can't safely do anything with such a pointer within the confines of defined behavior in C, therefore there are no requirements placed on its value.


On the contrary, you can do something safely with such a pointer - compare it with another pointer. If the value of 'alloca(0)' is _unspecified_, then this comparison is legal, and its result, while not specified, must be consistent. In other words, code like this should not fail the assertion:

  void *x = alloca(0);
  void *y;
  int is_x_null = (x == NULL);
  memcpy(&y, &x, sizeof(y)); // confuse the optimizer
  assert(is_x_null == (y == NULL));
Note that is_x_null can be either 0 or 1, and this value may even vary between runs. But it's not acceptable for the assert to fail here, and it sounds like it would with this bug.

If you argue that alloca(0) is 'undefined behavior', of course, all this goes out the window. Since alloca is not standardized, though, one can't really argue this - all of alloca's behavior is implementation-defined, and so if llvm-gcc really wants alloca(0) to be UB, it should document this fact as a porting concern.

Incidentally, all of this applies to malloc(0) as well. C99 defines malloc(0)'s behavior as follows:

> If the size of the space requested is zero, the behavior is implementation-defined: either a null pointer is returned, or the behavior is as if the size were some nonzero value, except that the returned pointer shall not be used to access an object.

This is actually even stricter than 'unspecified', in that it requires the implementation to document which choice it takes.


Good point that you can compare the pointer with another pointer. However, your bit about consistency only applies to results from the same call. Nothing says that alloca(0) must return the same pointer on two different calls, and indeed one would naively expect the opposite. While one could certainly claim a compiler bug if the same value compared both equal to and not equal to NULL, that's not the case in the example given; instead, alloca(0) behaves as if it returns NULL in some cases, and non-NULL in other cases, which is completely allowed.

It's true that alloca is all implementation-defined, but do you know of an implementation where the value of alloca(0) is defined? On OS X, it is not defined.

Comparing alloca(0) to malloc(0) is off-base. The return value from malloc(0) has an important requirement that alloca(0) lacks: you must be able to pass the result to free(). Therefore, the compiler can't have it be an arbitrary value, whereas replacing all calls to alloca(0) with (void *)arc4random() would be (aside from the side effect of calling arc4random()) a valid transformation.


> instead, alloca(0) behaves as if it returns NULL in some cases, and non-NULL in other cases, which is completely allowed.

Not really. I can't reproduce the optimizer bug here (on the contrary, the optimizer in the version of llvm-gcc I have installed seems to assume it is null[1]), but if alloca() is returning null and the optimizer assumes it's non-null, the value will be treated as non-null in the function but null if, say, the value is passed into a non-inline function which compares it to null.

[1] i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)


The 'consistency' I speak of is the idea that, if I assign alloca(0) to a variable, it's either NULL or non-NULL - I can't look at that same variable two different ways and get different answers. From the sounds of the article, this requirement might be violated by this bug.


I don't think so. Only one comparison is being done in the example code. I'd bet that if you e.g. printed out the value of the pointer before the if statement, the optimization would no longer kick in as it does.


"On the contrary, you can do something safely with such a pointer - compare it with another pointer. If the value of 'alloca(0)' is _unspecified_, then this comparison is legal, and its result, while not specified, must be consistent."

While you would expect a comparison to another pointer to be legal, I don't believe it has to be consistent. That's the whole point of undefined behavior. In one sense, it's like the question of whether NaN == NaN. In many implementations, that expression is not necessarily true--even though based on your pointer example logic it should be.

Referencing the OP: the real problem here is not that the NULL check got optimized out and so seemed inconsistent. That's within the tolerance of C's undefined behavior (which is designed for optimizations like this and is what makes it so hard to write truly sane C code). The real problem is a bug with the code as written. One should never check against an undefined result. Instead there should have been an initial boolean guard (sketched below) against the case of alloca'ing 0 bytes. I __could__ see an argument for expecting that to be always null (in fact, that's the argument for higher level languages!) But expecting it to be a specific value in relation to the stack? That seems arbitrary at best.
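
For concreteness, the kind of guard I mean (a hypothetical sketch, not the actual Ruby code):

    #include <alloca.h>
    #include <stddef.h>
    /* Never let a size of 0 reach alloca(), so whatever it does for 0
       simply never matters. */
    void scan_with_buffer(size_t n)
    {
        void *p = NULL;
        if (n > 0)
            p = alloca(n);
        (void)p;    /* ... use p only when n > 0 ... */
    }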


> While you would expect a comparison to another pointer to be legal, I don't believe it has to be consistent. That's the whole point of undefined behavior. In one sense, it's like the question of whether NaN == NaN. In many implementations, that expression is not necessarily true--even though based on your pointer example logic it should be.

Right, this is a question of whether alloca(0) is undefined behavior, an unspecified return value, or implementation-defined behavior. And I argue that alloca's behavior is implementation-defined in the first place, so if llvm-gcc wants alloca(0) to be UB it needs to define that :)


"the case of alloca'ing 0 bytes. I __could__ see an argument for expecting that to be always null"

??? I would expect that it always succeeds. Rationale: if, at some time and place, alloca(n) succeeds, I expect that any alloca asking for less space would succeed, too.


so alloca(-1) would succeed too, right?


No. According to the OS X and Linux man pages 'size' has type 'size_t'. It is undefined whether size_t is signed.


alloca takes a size_t. I think size_t is unsigned. http://stackoverflow.com/questions/1089176/is-size-t-always-... seems to agree with that (correction welcome)


Not sure I agree with that, allocating 0 bytes seems pretty well-defined to me, just like allocating a file of 0 bytes is well-defined.

That said, http://pubs.opengroup.org/onlinepubs/009695399/functions/mal... explicitly mentions that malloc(0) is implementation-defined, not undefined. It would seem strange to me if alloca(0) is supposed to be undefined instead of implementation-defined.


Files have metadata, and exist in an environment where the OS ensures that appending to a file doesn't overwrite another file. Neither of those apply to pointers in C.


Not sure why you mention overwriting. If you allocate a memory block of 0 bytes then it would seem logical to me that you can't write anything to it. NULL would also satisfy that definition, but alloca(0) is supposed to return a region of 0 bytes at the end of the stack... meaning a pointer to the end of the stack.


Everything in C operates according to the "as if" rule. In other words, the resulting program merely needs to behave as if the result was executed according to the spec. How execution happens is left entirely up to the compiler.

C also doesn't define "the stack" in any way. And of course alloca() isn't part of C at all, but on the systems where it exists, it's not very specific about just where it allocates stuff, just that it's on that nebulous "the stack".

Given the above, I believe you cannot write a conforming program that can detect the difference between alloca(0) returning a region of 0 bytes at the end of the stack and alloca(0) returning something else. Since no conforming program can detect it, the "as if" rule allows the return value to be considered to be anything at all.


The program in question is not a conforming program in the first place. It's the Ruby interpreter and it does all kinds of low-level stuff. alloca(0) is called from the garbage collector in order to detect the end of the stack so that the garbage collector can scan the stack for pointers. The code assumes that it's running on a system where there is a stack at all, which is pretty much all systems nowadays.


Of course it's not a conforming program. That's rather the point: as a non-conforming program, the compiler is allowed to apply optimizations which may behave differently from what the programmer wants it to do. That this code works on one compiler and fails on another doesn't make it a compiler bug, though. It merely means that this code relies on the compiler behaving in a certain way which isn't actually mandated.


Pointers you get from malloc() usually have metadata alongside too, you're just not supposed to read it.


On many systems, malloc(0) returns NULL. I recall one man page describing it as "a stupid answer to a stupid question" or some such.


Which is unfair; it's not remotely a stupid question: malloc gets passed a zero-length argument when it's used to allocate variable-length data. That malloc(0) works and is handled by free() without crashing the program is a simplifying assumption, as is the assumption that free(NULL) won't corrupt the heap.

Any use of alloca() on the other hand seems risky. Similar arguments could be made about the semantics of jmp_bufs, which also get used to get a handle on the stack.


"the assumption that free(NULL) won't corrupt the heap."

That's not an assumption, that's how the free() function is defined to work by the language standard. It never ceases to astound me how many otherwise good C programmers think free(NULL) is an error.


... yes, that's what I'm saying.


I see. The word "assumption" confused me, as in context it would generally mean something that isn't necessarily true.


as is the assumption that free(NULL) won't corrupt the heap.

When you refer to simplifying assumptions, are you talking about assumptions made by the programmer, or by the compiler and libc? For example, the POSIX manual page[0] for free( void* ptr ) says, "If ptr is a null pointer, no action shall occur." The malloc manpage says, "If size is 0, either a null pointer or a unique pointer that can successfully be passed to free() shall be returned." That sounds more like a definition than an assumption to me. What am I missing?

[0] Obtained by installing manpages-posix-dev on Ubuntu and running man 3posix free.


malloc(0) has to return a pointer you can pass to free(), so it can't be arbitrary junk. The only thing you can do with the pointer returned from alloca() is dereference it. Technically, as pointed out elsewhere, you can compare it with other pointers, but the only real requirement there is that it compare not equal to any other non-NULL pointer.


Well, alloca is not specified in any formal standard, so you won't find an official ruling on whether it's undefined, unspecified or well-defined. By itself, that state of affairs should strongly discourage its use in portable code.


At the bottom of my man page:

  BUGS
     The alloca() function is machine and compiler dependent; its use is
     discouraged.


I interpret that statement as it being similar to __asm__(), which is also machine and compiler dependent and discouraged in portable code. Still, sometimes you need it when writing low-level code. __asm__ doesn't blow up the way alloca in the given example does.


Incidentally, I ran into a nasty x64 __asm__ codegen bug with exactly the same compiler version that this blog post covers. It was in lockless multithreaded code, so you can imagine how much fun that was to debug. Rather than work around it, I ended up replacing all our GCC inline assembly with modern intrinsics like __sync_fetch_and_add.
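
For example (a minimal sketch of the kind of replacement, not our actual code):

    /* The legacy GCC/Clang __sync builtin that replaced hand-written
       lock-prefixed inline assembly for an atomic increment. */
    static int counter;
    int bump_counter(void)
    {
        /* atomically adds 1 and returns the previous value */
        return __sync_fetch_and_add(&counter, 1);
    }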


The example doesn't blow up at all, it just doesn't happen to behave in exactly the same way as a different compiler.


Trivia about the nasal demons: http://catb.org/jargon/html/N/nasal-demons.html


That's amazing! First time I've come across the term. For the lazy:

"When the compiler encounters [a given undefined construct] it is legal for it to make demons fly out of your nose. Someone else followed up with a reference to “nasal demons”, which quickly became established."


As much as I hate LLVM, and as many times as I've been burned by bad code it has generated, I at least have to agree about this: alloca(0) tends to do random stuff on different systems (in fact, it often seems to be special-cased to clear the alloca area in library implementations).

However, assuming the actual compilation and test results reported in this article are true, I personally don't care what the function does: if (alloca(0) != NULL), then alloca(0) /should not/ return NULL. ;P


You lost me at "As much as I hate LLVM."

What's to hate?


I am often seen arguing against LLVM on purely philosophical grounds from my asserted position in the world of open devices and jailbreaking; specifically that it, and Clang, are now products heavily funded (nearly owned) by Apple with the goal of decreasing their reliance on a project (gcc) that was relicensed under GPLv3, an event that caused Apple to immediately retract all of their engineers from contributing code, or even merging newer versions. This opinion has been strongly forged after numerous dealings with Apple's open source release department, having to pester them over and over again to get updated versions of gcc, gdb, and WebCore for their various systems (most specifically the iPhone, for which they like to redact all open-source code).


So you are basically saying you hate Apple and the way they contribute to OSS (which IMO has been a very significant contribution benefiting many other OSS projects), not LLVM or Clang. Good to have that sorted out... :-/


"""I am often seen arguing against LLVM on purely philosophical grounds from my asserted position in the world of open devices and jailbreaking; specifically that it, and Clang, are now products heavily funded (nearly owned) by Apple with the goal of decreasing their reliance on a project (gcc) that was relicensed under GPLv3, an event that caused Apple to immediately retract all of their engineers from contributing code, or even merging newer versions"""

No, he means what's a valid, actual, technical reason to hate.

Also, it's not just the GPLv3 transition (though that gave Apple's migration away a huge boost); it's also tons of technical inefficiencies in the ancient design of GCC.

As an example, you couldn't do Xcode-style AST-aware autocompletion with GCC without tons of hurt.


Once you invoke undefined behavior, all bets are off. If the result of alloca(0) is undefined (and reading the man page, it appears to be) then any return value is valid. Likewise, the optimizer is allowed to assume that alloca(0) never happens, because once you invoke undefined behavior, anything can happen.

C is a dangerous language. A large part of that danger is to allow compilers to produce efficient code. A major point of "undefined behavior" is to allow compilers to assume that such things can never happen, and thus avoid generating code that has to deal with them.

To take a more sane example:

    int *x = ...;
    *x = 42;
    if(x == NULL)
        foo();
A C compiler would be entirely within its rights to completely delete both the if check and the call to foo() in this code. It's illegal to dereference a NULL pointer, so the compiler may assume that that code path can never happen.

Now, you may set up your system so that 0x0 is actually a valid address that can be written to, with clever mmap tricks or whatever. You may then initialize x with NULL and run this code, expecting the dereference to work, and foo() to be called. From a naive point of view, that's what should happen. However, even though 0x0 is a valid address in this environment, dereferencing it is still undefined behavior according to the language you're using. If you manually wrote the above in assembly you're fine, but the moment you do this in C, all bets are off.

It's the same deal with alloca(0). You can't count on the return value being anything in particular, and simultaneously the compiler can assume that the return value is anything it feels like.

Edit: thought of a more pertinent example. Consider the following:

    int x = INT_MAX;
    int y = 1;
    int z = -INT_MAX - 1;
    if(x + y == z)
        foo();
    else
        bar();
Directly translating this code to assembly and running it on any modern architecture would result in foo() being called (assuming I didn't screw up my arithmetic). The addition of x + y wraps around, and with a two's complement representation this results in the most negative integer being produced. That same integer is generated more directly in z, and the two compare equal.

But! integer overflow produces undefined results in C. The above program is ill-formed, and the compiler is completely within its rights to assume that the conditional is never true, because the moment you wrote x + y you gave up any right to expect any particular value in the result.

Interestingly, the version of clang on my computer optimizes the above to always call foo(), rather than always call bar(). However, either choice would be correct.

One certainly could get used to integer overflow always wrapping around according to two's complement and expect the above code to work. Upon encountering a compiler that optimizes the above to always call bar(), one might first suspect that the compiler is broken. However, it is the code that is broken, and the compiler is correct.


A) I am pretty certain that alloca(0) is not "undefined" per C, as I'm pretty certain the C standard does not mention alloca at all. As far as the C language is concerned, alloca is a function like any other function.

(edit: To be 100% clear of the ramifications of this, even if alloca itself is implemented using horrible undefined black magic, a pedantic C compiler would not have advanced knowledge of that happening inside the function, and could not prematurely optimize it away.)

(Note: I use the term "pedantic" in this edit to describe such a C compiler, as the real-world behavior of practical systems does not conform to the view that "undefined" means "could order a kill strike on your children".)

B) Your example is kind of off, btw, as NULL and a pointer with dynamic value 0x0 do not mean the same thing: I am allowed to dereference a pointer that is at the address 0x0, and the dynamic value of NULL need not be 0x0. ;P

(edit: This example was deleted by the poster I am responding to, but was an example involving comparison of a pointer value to NULL being allowed to be optimized to false if it had been previously dereferenced.)

(edit:) C) Your new example is at least internally consistent, but is still specifically relying on behavior that is undefined. The C standard does not define any undefined behavior with respect to calling the function "alloca" any more than it does calling the function "hello".

If the behavior of the function itself is undefined with respect to being passed 0, that does not affect the language's implementation of what to do when calling that function: it does not know how undefined it is.

What is actually going on here is that gcc has an optimized version of alloca that it declares as a "builtin"; this is both to make alloca itself performant (one instruction), but also to allow it to make further optimizations in the function.

These optimizations should be compliant with the C language standard, and in this case they are not. Honestly, in the real world (as opposed to pedantic standard land), that's fine, but this case is just egregiously confusing.

In fact, sufficiently confusing that I can't imagine the developers of llvm-gcc would not consider it a bug; in essence, gcc and llvm's translation layers are being layered, and they are interacting "poorly".

However, as llvm-gcc is a discontinued product, this bug will not get fixed. And yet it is currently the best compiler that Apple has provided us for use on their platforms as of Xcode 4.2, which is "unfortunate".

(Note: I just say "unfortunate". It is not necessarily "horrible"; it is simply "unfortunate". There are many bugs in llvm-gcc that are not present in gcc, and it is "unfortunate" that Apple hates GPLv3 sufficiently to have not only thrown a ton of money at replacing it, but to have now even stopped shipping the old stalwart.)


A C compiler is not required to know what the alloca() call does, but it is allowed to do so. If your call to alloca() resolves to the one in your system's standard C library, the compiler is allowed to make use of any knowledge it may have about that call, how it works, and what its semantics are. It is then perfectly legal for it to make optimizations based on those semantics.

I know that NULL isn't necessarily 0x0. My example simply assumes that it is, which is usually the case. Note that even if NULL is not 0x0, and the address that is 0x0 is valid, it's still not legal to do *(int *)0 = 0, as (int *)0 is NULL, regardless of the actual underlying value of NULL.


"This example was deleted by the poster I am responding to, but was an example involving comparison of a pointer value to NULL being allowed to be optimized to false if it had been previously dereferenced."

What are you talking about? I didn't delete anything.


Sorry, I saw the new example appear and I thought it had appeared into the slot where the old one had been. That is my mistake for finding myself in a confusing edit/edit thread and not managing to internally track the diffs well enough.


The point is that the behavior depends on the optimization setting. Worse: as you saw in the example, the optimizer actually expects the result to be non-NULL!


Is that not the essence of undefined behavior?


You're right, but would alloca(0) be undefined behavior or simply implementation-specific behavior? I think the latter would make more sense, and if so then alloca(0)'s behavior should at least be consistent and not depend on any optimization settings.


The Clang build shipped (called "LLVM Compiler" in Xcode) is also buggy: it has issues with some floating point operations on armv6 (edit: I think; it might only be THUMB, or THUMB and armv6) builds (for the iPhone). There isn't a non-LLVM GCC build for the iPhone anymore, however, so you'll probably have to choose between this bug and the floating point one when building for iPhone.


It helps to turn Thumb off if you encounter that problem on armv6 builds.


Blocks in Objective-C are also buggy. If you chain 3+ in a row, the compiler craps out:

  //Ex:
  [UIView animateWithDuration:1 animations:^{...} completion:^(BOOL finished){
    [UIView animateWithDuration:1 animations:^{...} completion:^(BOOL finished){
      [UIView animateWithDuration:1 animations:^{...} completion:^(BOOL finished){
        //Compiler error here
      }];
    }];
  }];


I use gcc for code gen.

The idea of using undefined behavior for optimization is really terrible. Somehow they forget this is an engineering project, not research. Breaking existing code costs a lot of time and money.

By the way, I don't see their optimizations having any impact on the real project I am working on (highly CPU intensive).


"The idea to use undefined behavior for optimization is really terrible. Somehow they forget this is engineering project, not in researching. Breaking existing code costs a lot of time and money."

Saying that using undefined behavior for optimization is really terrible is living in a pipe dream if you code in C at all. Every performant C compiler uses undefined behavior to optimize its resultant code. In fact, GCC does a fair amount of this too.

The real value in using undefined behavior to optimize code is that it enables a fair amount of layering of optimizations. And real-world tests show a significant improvement in program running speed because of the interaction of optimizations.

The real problem is that most C programmers expect far more defined behavior than the standard actually gives. For a long explanation and lots of examples of this, see the series of blog posts at: http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

BTW, breaking existing code happens all the time with new compiler version releases with LLVM or GCC.

A side note on GCC: back in school we actually had very simple programs that used only STL data structures (all calls being legal according to the library specs) that would cause seg faults with certain levels of optimizations in GCC. What's worse: it didn't happen in the previous major X.Y version (i.e., the difference between 4.2 and 4.1). But even on the breaking build it worked with -O1 but not any higher. That's inconsistent behavior in optimizations if I've ever seen it. But to be fair to the compiler: it's likely that the STL made assumptions that you technically can't make according to the C++ spec. Just like the OP's code. I love Ruby, but using MRI as an example of a correct program is rather extreme. MRI makes tons of assumptions that often lead to noticeable bugs in the interpreter.


I tried compiling the code they have in the example with llvm-gcc and it doesn't compile and returns an error?



