

Optimizing C++ Code: Dead Code Elimination - AndreyKarpov
http://blogs.msdn.com/b/vcblog/archive/2013/08/09/optimizing-c-code-dead-code-elimination.aspx

======
sharth
Personally, I'd expect a compiler to take it one step further than what was
suggested here.

For example, clang or gcc (under -O1) will precompute all of the math in that
for-loop, and the entire function just becomes the printf with a known value.

    define i32 @main() nounwind uwtable ssp {
      %1 = tail call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([6 x i8]* @.str, i64 0, i64 0), i64 500000000500000000) nounwind
      ret i32 0
    }
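For context, the source that clang folds down to that IR is presumably a sum loop along these lines (my sketch; the function name is mine, not from the article):

```cpp
#include <cstdint>

// A loop like this is what clang/gcc fold at -O1: the optimizer
// recognizes the induction variable and substitutes the closed form
// n*(n+1)/2, so sum_to(1000000000) compiles down to the constant
// 500000000500000000 seen in the IR above.
uint64_t sum_to(uint64_t n) {
    uint64_t sum = 0;
    for (uint64_t i = 1; i <= n; ++i)
        sum += i;
    return sum;
}
```

Compiling with `clang -O1 -S -emit-llvm` should show the loop gone and only the constant remaining.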

~~~
CurtHagenlocher
If you read all the way into the comments, you'll see that this does happen --
but only for values of n that are quite small. And this is a side effect of
other optimizations -- not because of anything specific to this closed form.

~~~
DannyBee
Yes, which is kinda sad. Both GCC and LLVM do it as a result of computing
scalar evolutions for the variables and then instantiating the final values
after the loop.

There are certain calculations both will bail on due to solving cost (rather
than inability to solve), but otherwise ...
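In effect, final-value substitution replaces the loop with its closed form. A hand-written sketch of the result (my illustration, not actual compiler output):

```cpp
#include <cstdint>

// What scalar evolution derives for `for (i = 1; i <= n; ++i) sum += i;`:
// the recurrence's value after the loop has the closed form n*(n+1)/2,
// so the loop body never needs to execute at all.
uint64_t sum_final_value(uint64_t n) {
    return n * (n + 1) / 2;
}
```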

------
octo_t
Dead Code Elimination can even optimise out blocking loops; a loop like
for(;;) can be optimised out.

~~~
knz42
I don't believe that's right (see below for why). Point me to one compiler
which does this _intentionally_ (i.e. not as a bug) and I will stand
corrected, but in most cases most compiler experts I know (myself included)
would frown and disapprove. There was a bug in GCC a while ago to that effect,
but it has since been corrected.

Why: computationally, there is no difference between "no code" and "a loop
that iterates 100 times and does nothing". It is thus correct to eliminate the
latter, since doing so preserves the observable behaviour. However, a loop
that _never terminates_ is a different thing entirely: it says "execution will
never progress past this point", and everything afterwards will never be
reached. This property is so enshrined in theoretical computer science that
any half-decent verification tool will consider what comes after the loop to
be dead code and not check it. In newer GCC and Clang, the code _after_ a
never-ending loop will be marked "unreachable" and be killed as dead code.

So removing a never-ending loop would be quite the big deal indeed.

Technical note: specifically in C and C++ (and every language implemented
using them), a loop with a signed counter but no check on the upper bound
(eg. "for(i = 0; ; ++i)") is very different from the empty loop discussed
above. Such a loop has undefined behavior: the program may or may not stop,
the world may explode, etc. This is because C and C++ say that signed integer
overflow has undefined behavior. So this kind of loop does not mean "the
programmer intends the control flow to terminate here", but rather "the
programmer is playing with fire and may get burned".
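A runnable way to see the contrast (my sketch, not from the thread): unsigned overflow wraps with defined semantics, while running a signed counter past its maximum would be undefined behavior:

```cpp
#include <cstdint>

// The unsigned 8-bit counter wraps 255 -> 0 modulo 256 with fully
// defined semantics, so this loop provably terminates after exactly
// 255 iterations. With a plain signed int counter incremented past
// INT_MAX, the overflow would be undefined behavior, and the compiler
// may assume the exit condition is never reached at all.
int iterations_until_wrap() {
    int count = 0;
    for (uint8_t i = 1; i != 0; ++i)
        ++count;
    return count;  // 255
}
```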

~~~
nhaehnle
What you write is perfectly plausible, and yet you're wrong. Such is the magic
of the C++ standard ;)

There is a part of the C++ standard that says something along the lines of:
the compiler may assume that every program will eventually exhibit some side-
effect. That part of the standard also explicitly states that this is intended
to allow compilers to remove loops with empty bodies (or bodies with only dead
code) even if they cannot prove termination of the loop. See e.g. page 14 of
[http://isocpp.org/files/papers/N3690.pdf](http://isocpp.org/files/papers/N3690.pdf)

This is certainly useful when you think of code that is created indirectly via
macros and/or template instantiation. Whether compilers actually perform this
"optimization" on explicit endless loops like for(;;); is another question,
but at least C++ can claim to be able to run certain endless loops in under a
second ;)
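The usual illustration of that clause (my sketch; the Collatz loop is the textbook example, not something from this thread) is a loop whose termination nobody can prove:

```cpp
// Whether this loop terminates for every x is an open problem (the
// Collatz conjecture), yet the forward-progress rule lets a C++
// compiler assume it does. If the result were unused and the body
// had no side effects, the entire loop could legally be deleted.
unsigned collatz_steps(unsigned long long x) {
    unsigned steps = 0;
    while (x != 1) {
        x = (x % 2) ? 3 * x + 1 : x / 2;
        ++steps;
    }
    return steps;
}
```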

~~~
knz42
Wait a second. The document you link to is a working draft for the next ISO
C++ standard, which is by no means finalized yet (planned for 2014 or 2017?).
The 2011 standard does not contain any wording to the same effect. Again, I
would be surprised if this specific paragraph makes it into the final version.

Also, the initial discussion above was about C, and the ISO C standard does
not contain words to that effect at all.

------
eliasmacpherson
Got stung by this the other day, testing out of idle curiosity whether
strncmp or memcmp was faster.

~~~
deletes
So memcmp was faster, right?

~~~
eliasmacpherson
Didn't finish the job; I realised the code was being optimised out by -O3 and
left it at that. Unoptimised strncmp was beating unoptimised memcmp: it was
taking 1.900s to finish where memcmp was taking 2.178s, over many iterations.
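A common way to keep a micro-benchmark like that alive under -O3 (my sketch, with my own naming) is to feed every result into a volatile sink so the calls stay observable:

```cpp
#include <cstring>

// Writing the accumulated results to a volatile object is an
// observable side effect, so the optimiser cannot delete the
// comparison loop as dead code.
volatile long long sink;

long long run_compare_bench(int iters) {
    char a[64] = "hello world, hello world, hello!";
    char b[64] = "hello world, hello world, hello?";
    long long total = 0;
    for (int i = 0; i < iters; ++i) {
        total += memcmp(a, b, sizeof a);
        total += strncmp(a, b, sizeof a);
    }
    sink = total;   // observable side effect survives -O3
    return total;   // negative: '!' (0x21) sorts before '?' (0x3f)
}
```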

------
aa0
MSVC barely supports C99, but its devs have time to make posts about the
obvious nature of optimization. Sigh.

~~~
gecko
This gets repeated a lot, but I'll say it again anyway:

MSVC is a C++ compiler. It happens to support some C99 (and will support more
in VS2013, actually), but that's not its focus. While I, and many other
Windows developers, feel that its standard compliance has fallen a bit behind
in the last five years or so, Microsoft has made tremendous strides towards
fixing that problem lately, and the roadmap
([http://blogs.msdn.com/b/somasegar/archive/2013/06/28/1042993...](http://blogs.msdn.com/b/somasegar/archive/2013/06/28/10429934.aspx?Redirected=true))
for full C++11/14 compliance looks reasonable. Further, while C++11/14
compliance in MSVC isn't amazing, its compilation speed and the optimizations
it delivers to compiled executables are both top-notch. VC++ generally seems
to land between GCC and ICC for most code I write.

Is it perfect? No. Should it support more of C++11 and 14? Yes, and they're
working on it. But as much as I wish there were a great C99 compiler on
Windows, Microsoft has been open for a very, very long time that that's simply
not their focus. Just as I don't assail Apple for Xcode's lousy Python
support, I don't see a point in attacking MSVC for not doing C support.

~~~
tveita
It would help if they would just declare it unmaintained. Maybe projects would
give up on sticking to a 20 year old C standard just to maintain compatibility
with MSVC.

~~~
rossy
I wish these projects would just switch to MinGW. VLC and FFmpeg already use
MinGW to target Windows. It's fast and stable, and it lets them use nice C99
features like designated initializers.

~~~
gecko
Designated initializers are (thankfully) among the subset of C99 that is
actually getting added in the next MSVC release.
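For the unfamiliar, designated initializers let you initialize struct members by name (my example; this is C99 syntax, also accepted in C++20 provided the designators appear in declaration order):

```cpp
struct point { int x, y, z; };

// .y is omitted, so it is zero-initialized; when compiled as C++20
// the designators must appear in declaration order.
struct point make_point() {
    struct point p = { .x = 1, .z = 3 };
    return p;
}
```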

~~~
rossy
Yeah, VS2013 looks like a step in the right direction. I probably won't use it
myself since I prefer FOSS development tools, but it will be a huge win for
interoperability. I hope they implement enough C99 for big projects like
FFmpeg to work.

