
Inter-Procedural Optimization and Derefinement - adamnemecek
http://www.playingwithpointers.com/ipo-and-derefinement.html
======
legulere
It seems to me, rather, that the linker shouldn't merge different
implementations of a function.

On the other hand, the compiler should perhaps rename the function when it
does inter-procedural optimization on it.

~~~
cwzwarich
The C/C++ standards require that two pointers to the same inline function are
equal, even across translation unit boundaries. Therefore, in the worst case
you need to pick a single implementation of the function in the linker. Of
course, you could use different implementations in translation units where the
address of the function is insignificant.

A similar issue affects the C++ template compilation model used in most
compilers. If template instantiations weren't merged at link time, C++
binaries would be incredibly bloated (and C++ binary size is already a
problem).
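
A minimal sketch of the address-equality rule (file and function names here
are made up): an inline function defined in a shared header has to have a
single address program-wide, so the linker must fold the per-TU copies into
one canonical definition.

      // header.h
      inline int twice(int x) { return x * 2; }

      // a.cpp
      #include "header.h"
      int (*addr_in_a())(int) { return &twice; }

      // b.cpp
      #include "header.h"
      int (*addr_in_b())(int) { return &twice; }

      // main.cpp
      #include <cassert>
      int (*addr_in_a())(int);
      int (*addr_in_b())(int);
      int main() {
        // The standards require this to hold across translation units,
        // so the linker has to keep one canonical address for twice.
        assert(addr_in_a() == addr_in_b());
      }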

~~~
DannyBee
"The C/C++ standards require that two pointers to the same inline function are
equal, even across translation unit boundaries."

Sigh. I presume this is because you can pass inline functions to callbacks,
and the 6 people in the world who do

if (a == &f::foo) {do something}

want it to work no matter what?

One solution, of course, is to privatize the function you call, but still hand
the "other" version to any code paths that escape pointers or take the address
(which I presume is what you meant).
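
A rough sketch of that split, with hypothetical names (not what any
particular compiler actually emits): direct calls go through a private,
freely optimizable clone, while any path that takes the address still sees
the one canonical symbol.

      // Canonical version; keeps external linkage so &foo compares
      // equal across translation units.
      int foo(int x) { return x + 1; }

      // Hypothetical compiler-generated private clone; it can be
      // specialized/derefined because its address never escapes.
      static int foo_private(int x) { return x + 1; }

      void direct_caller(void) {
        foo_private(41);   // direct calls use the private clone
      }

      int (*escaping_path(void))(int) {
        return &foo;       // address-taking paths get the canonical foo
      }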

------
dkarapetyan
I don't understand the example

      bool g(bool b) {
        if (b)
          return true;
        int unused = 1 / 0;
        return false;
      }

Shouldn't it be optimized to

      bool g(bool b) {
        return b;
      }

instead of

      bool g(bool b) {
        return true;
      }

~~~
nikic
Division by zero is undefined behavior, so the compiler is free to assume that
the b==false branch is unreachable.
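
In other words, on any call that avoids undefined behavior, b must be true,
which is why the result is return true rather than return b. A sketch that
makes the same inference explicit, using the GCC/Clang __builtin_unreachable()
intrinsic in place of the division:

      bool g(bool b) {
        if (b)
          return true;
        // the assumption the optimizer derives from 1 / 0: control
        // never reaches this point, so b must have been true
        __builtin_unreachable();
      }

      // hence the whole body can fold to
      bool g_folded(bool b) { return true; }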

~~~
choosername
It would also be reasonable to remove the declaration, as the variable is
"unused".

------
foota
I've always wondered: is there a time when assuming that no undefined behavior
occurs (and none actually does) allows the compiler to do an optimization it
couldn't do otherwise?

