What Are Your GCC Flags? (httrack.com)
174 points by fruneau 1294 days ago | 114 comments

The compiler flags for Ag[1] are rather strict these days:

    -Wall -Wextra -Wformat=2 -Wno-format-nonliteral -Wshadow \
    -Wpointer-arith -Wcast-qual -Wmissing-prototypes -Wno-missing-braces \
    -std=gnu89 -D_GNU_SOURCE -O2
Note that -Wall and -Wextra do not enable all warnings. To keep backwards compatibility, -Wall is basically, "All warnings as of 1990." -Wextra covers a lot of the newer warnings, but still misses a few.

I also use scan-build[2] for static analysis and clang-format[3] to ensure a consistent style. It was frustrating when I first enabled all these options, but the warnings helped me discover bugs that had been lurking for years.

1. https://github.com/ggreer/the_silver_searcher

2. http://clang-analyzer.llvm.org/scan-build.html

3. http://clang.llvm.org/docs/ClangFormat.html

-pedantic-errors is also a very useful flag.

-Wstrict-aliasing=1/-Wsuggest-attribute= can give good suggestions during development.

Probably not useful for Ag, but I do a lot of numeric stuff, so -Wfloat-equal is handy for me. For code using float, this should be mandatory: -Wdouble-promotion.

gcc's -Wstrict-aliasing is extremely inaccurate. It doesn't run inside the aliasing machinery; it's just a cheap heuristic inside the C parser. I'd call it a good example of why you shouldn't strive to fix a warning just because it happens to exist.

I thought there was a better tool, but can't find the name, hmm.

I really wish for a -Weverything flag that would enable all warnings, even the stupid, useless ones. I'd then add -Wno-<stupid, useless warning> to disable those.

clang has -Weverything. I prefer to start with that, use #pragma clang diagnostic push/ignored/pop as needed, and only turn off the really obnoxious ones, like -Wc++98-compat-pedantic.

Upvote for the completely off-topic reason of having written one of the most amazingly useful tools I have ever touched.

This is probably an unpopular opinion, but:

-Werror is fine on your dev machine, but it is a very bad idea in the official build system, notably if you do it for a library that you want people to use.

As soon as you want to compile with various compilers, systems, cross-compilation and other funny things that make a project portable, you're going to have a bad time.

New compilers will emit new warnings, old compilers will emit different warnings, and clang and gcc warn about different things. Different distributions, with gcc built with different flags, will produce different warnings too!

And of course, I haven't even mentioned libraries (or MinGW) that will always warn because of warnings inside their headers (I'm looking at you, Fribidi).

Our experience is that it is actually an excellent code-hygiene decision. We use multiple builders with fixed GCC releases (for RHEL5, RHEL6 etc. flavors), and yes, it means fixing additional warnings when we have to introduce a new architecture. But I cannot count the number of serious issues we avoided by spotting and investigating warnings.

My experience is that if you add MSVC, ICC, GCC 3.3, clang, and hardened GCC on Linux, Windows, Mac, BSD, Android, iOS, Windows Phone, OS/2, and Solaris, allow PPC or MIPS, and have many cases of cross-compilation, this just does not work.

Especially when you have ASM code or complex arithmetic, and when you allow compiling for different versions of Windows...

It's a good idea to have it on your build-farm, to track bugs, but it is not OK on your shipping build.

That's fine and dandy. It's not fine when it's an open source project from three years and four compiler versions ago that you want to compile, and find -Werror on every line invocation in a Makefile.

What you build with in-house for development/test-farms is not what you should ship as default to users of your project.

> find -Werror on every line invocation in a Makefile

If it's on every line and not kept in one place in $(CFLAGS), you have bigger problems.

... not to mention that if it is on every line, stripping it out should take 30 seconds with sed.

Hi Xavier

Kinda off-topic, but have you considered translating HTTrack comments and variable names to English?

Hmmm, to be honest I wish I had the time, and more than that, the courage, to clean up the code (and this would include major refactoring and redesign). The project is now 15 years old (D'oh!) and was not meant to live more than six months at the time, leading to an awfully bad overall coding style.

Marco Arment, John Siracusa and Casey Liss had a lengthy discussion about this in the last two episodes of ATP[0] (mostly in #54[1], with some follow-up in #55[2]).

[0]: http://atp.fm

[1]: http://atp.fm/episodes/54-goto-fail

[2]: http://atp.fm/episodes/55-dave-who-stinks

I experienced issues with this flag being used by an open source library in autotools (specifically the AM_INIT_AUTOMAKE option in the configure.ac file). The actual C code also used -Werror, and compiled perfectly, but "./configure" wouldn't complete because of minor "this is a GNU extension" and similar errors during automake.

Not being an autotools guru this was incredibly frustrating. I had no idea why the damn thing wasn't working until I noticed "warnings are treated as errors".

Please don't use -Werror in any context if your project is open source.

I'm not sure how I come down on the issue in "the official build system" - the issues you raise are legitimate; I'm uncertain whether they dominate. I do think that it's not just fine but should be mandatory on your dev machine (and probably the dev machines of other people contributing to the project).

    I don’t like C++ exceptions. And we don’t use them where I work
And none of your code uses the STL or third-party libraries that might throw exceptions either, right? [1]

[1] "Doing without" http://gcc.gnu.org/onlinedocs/libstdc++/manual/using_excepti...

AFAIK, the STL (and beyond that, C++) throws exceptions especially in case of allocation failure (i.e. new or push_back() failing to allocate memory), and more generally in case of programming errors (out_of_range, bad_cast, etc.), and I am 100% okay with the program calling abort() in such situations.

Do you use things like the range-checked `std::vector::at` or `dynamic_cast` with reference types at all then? If so, do you put a breakpoint on `abort` during development?

Most code can use -fno-rtti Real Time Type Info

run-time type information

Firefox's about:buildconfig page will tell you the compiler flags used to compile your Firefox:

  clang++ -Qunused-arguments -Qunused-arguments -Wall -Wpointer-arith -Woverloaded-virtual
  -Werror=return-type -Werror=int-to-pointer-cast -Wtype-limits -Wempty-body
  -Wsign-compare -Wno-invalid-offsetof -Wno-c++0x-extensions -Wno-extended-offsetof
  -Wno-unknown-warning-option -Wno-return-type-c-linkage -Wno-mismatched-tags
  -Wno-error=uninitialized -Wno-error=deprecated-declarations -isysroot /Developer/
  SDKs/MacOSX10.6.sdk -fno-exceptions -fno-strict-aliasing -fno-rtti -ffunction-sections
  -fdata-sections -fno-exceptions -fno-math-errno -std=gnu++0x -pthread -DNO_X11 -pipe
  -DNDEBUG -DTRIMMED -g -O3 -fno-omit-frame-pointer -Qunused-arguments

I start with:

I really don't like the stack protector. It adds a lot of space to executables, so I turn it off:

Arthur told me about:

which seems to save a lot of space. I don't know exactly what it does, but the documentation suggests it has something to do with debugging; however, `-s` doesn't remove it, so I keep this here.

I often work without glibc (don't need it) but I like gcc's builtins so I have:

    -Dabort=__builtin_trap -Dmemcpy=__builtin_memcpy -Dmemset=__builtin_memset -minline-all-stringops -msse2 -ffreestanding -nostdlib -fno-builtin
which seems to do the trick. I don't think all of these are necessary on all versions of GCC but I keep running into versions that complain about something so this line keeps getting longer. On x86 I additionally use:

since it saves a lot of space and helps benchmarks.

For any benefit you'd get from -mregparm, you'd get a much better one by switching to amd64 already! Or are you having to support Windows?

X32 isn't quite everywhere yet, and the larger words add a lot of space. I try not to use them if I don't need it, but I'm definitely looking forward to X32.

I go with:

g++ -std=c++11 -O3 -fomit-frame-pointer -fwrapv

-fwrapv turns off some "bad" optimizations around signed integer overflow (too likely to cause harm, too unlikely to make a significant performance difference in most cases).

I also use a lot of asserts to verify the platform behaviors that would be too costly not to rely on for what I do (low-level CPU simulation and such): linear A-Z, 8-bit char, twos-complement math, arithmetic shift right on signed types, int > 16 bits, etc.

I'm sure my views won't be popular, and I'm not encouraging anyone to follow what I do, just stating my preferences.

I have a love-hate relationship with warnings. My problem is that you end up with false positives that amount more to "how the compiler authors think you should style your code" than to reports of legitimate issues. When combined with -Werror, it's a show-stopper for no reason.

Clang is much naggier than GCC. For instance, I frequently switch on boolean variables. Clang doesn't even have a "-Wno-" flag I can push to temporarily disable the resulting warning.

But there's nothing at all illegal about switching on a boolean value. It annoys me that I need to go back and add unnecessary explicit casting in 100 places in my project to keep Clang quiet, or face real warnings being lost in a sea of false warnings every single time I build my project. I know you can do if(var) { ... } else { ... } ... I don't care. I want to use switch, and I legally am allowed to. Don't bug me about it, Clang.

It also really hates empty statements, e.g. while(do_something()); warns that there's nothing inside the while loop. I know; the important part is do_something() and its return value. Same for ifs, fors, etc. It wants me to put the ; on its own line. Uh, no. That's not my style at all.

And at the same time ... Clang caught a few bugs that GCC overlooked.

So, my current strategy is to build WIPs with GCC at default warnings and Clang with the sledgehammer of -w; and then before any releases, build with maximum warnings on both compilers and analyze each one for legitimate issues. I also run with valgrind to catch many other types of issues, like using uninitialized variables and memory leaks.

From clang's source code:

    // switch(bool_expr) {...} is often a programmer error, e.g.
    // switch(n && mask) { ... } // Doh - should be "n & mask".
    // One can always use an if statement instead of switch(bool_expr).
I agree that clang should not warn on this without asking for warnings and even then it should be possible to disable it. Maybe it would be a good idea to file a bug report.

That being said, I can't imagine why you would ever choose to switch on a bool, and even go to the trouble to add a cast instead of just writing an if statement.

Also, my clang and g++ with "-Wall -Wextra" does not warn on an empty while statement.

Thanks for digging that up. Interesting to see their rationale. Seems like they could extend the test to not warn if it's just switch(var) only, but it might have to happen at a different eval level for that to work.

> I can't imagine why you would ever choose to switch on a bool

In my case, it's for opcode execution. They consist of various bit fields that control the behavior, some are only 1-bit wide, some are 2-bits to 5-bits wide. The implementation is a series of switch statements, one after the other, that have cases for all possible values. So by using switch() in all cases, the code looks consistent.

So this is kind of what I dislike about style choice warnings. It's easy to presume there's no valid use case, until you actually find one later on. I get that people can make mistakes, but when I know for certain that I haven't made a mistake, I don't like having to change my code anyway.

Could I force the boolean values to unsigned types even though they're only 1-bit? Yes. Could I just do if/else anyway here? Yes. But I don't want to. I'm happy with my code, and received no warnings at all when I wrote it with GCC several years ago. It's only a "problem" now that I need Clang to target OS X.

> Also, my clang and g++ with "-Wall -Wextra" does not warn on an empty while statement

Hmm, good to hear. If I get to my dev PC before the post is buried, I'll post the output I was getting.

Looks like somebody added -Wswitch-bool just now: http://lists.cs.uiuc.edu/pipermail/cfe-commits/Week-of-Mon-2...

Oh wow, I wonder if someone saw this thread, or if that was just a coincidence.

Either way, very cool! Thanks for the heads up! I can't use it now obviously, but I can add a diagnostic disable line that'll work in the future.

1. -fomit-frame-pointer is implied by O3 on most platforms now

2. "too likely to cause harm, too unlikely to make a significant performance difference in most cases."

Please define "most cases". Without this, GCC will have significant trouble being able to derive the bounds of most loops, and in turn, will not be able to vectorize, unroll, peel, split, etc.

Saying "unlikely to make a significant performance different in most cases" is probably very very wrong for most people. The last benchmarks I saw across a wide variety of apps showed the perf difference was 10% in most cases, and a lot more in others.

1. that's good to know. I'm all for shortening my cflags line since I don't squelch my Makefile rules.

2. I always get bitten when I try and generalize. I tested this in all of my software, and was not able to detect any performance difference with or without -fwrapv (that is to say, < 1% difference, too small of a difference to make any conclusions.)

I know you can create extreme edge cases where there's a huge difference, just as you can probably make up one that's slower without -fwrapv if you really wanted to.

But yeah, maybe I just don't write code that lends itself to benefiting heavily from these types of assumptions. I also tend to not really rely on signed integer overflow following twos-complement. But all the same, I will take well-defined behavior over the crazy stuff GCC can produce any day, even at the cost of a bit of performance. Of course, going all the way to -O0 is way too extreme. So a case where I see no perceptible performance impact and gain defined behavior? Win-win.

2. You must not write software very amenable to these optimizations, then.

Fun fact btw: GCC and LLVM are the only compilers I know of to assume loops can overflow at all when optimizations are on.

Compilers like XLC will actually even assume unsigned loop induction variables will not overflow at O3, unless you give them special flags.


> Compilers like XLC will actually even assume unsigned loop induction variables will not overflow at O3

That's not exactly fair. The C standard guarantees that unsigned variables wrap on overflow, so if the compiler assumes such a loop can't wrap, it is not conformant.

That's exactly the point: they cheat, because it makes code faster except in the small percent of code it breaks.

Let us all now bow our heads to the almighty SPEC gods ...

I'm curious if there is a way to rewrite your loops which use unsigned types to somehow communicate to the compiler that you will not overflow.

You'd have to use annotations or asserts. Or rely on literal whole program analysis to prove upper bounds of parameters/etc (which still may not be possible statically)

Otherwise, given as something as simple as

for (unsigned i = 0; i < N; i+=2)

You can't say it iterates N/2 times

I took a couple examples where gcc failed vectorization for unsigned indexes and tried to rewrite them in a way that didn't sacrifice 1/2 of the type's range just to satisfy the optimizer. In the first example I could just change a "<= n" to "!= n + 1", and the second example, which was based on your comment, could be solved by using pointers instead of indexing. I still wonder how many examples can't be solved without using signed types.

The results: http://ideone.com/7vidIs

Generated code http://tinyurl.com/le72k4o

In reality I'm not sure what kind of loop would have an index only fitting in an unsigned type.

On a 32-bit machine an unsigned array index means one object using more than half the address space. It's sensible to use unsigned 64-bit for file sizes, but I think it's quite odd that C programmers would use it for a loop or array index. Wrong, but defined, behavior is worse than undefined behavior, you know.

On a 64-bit machine, well, it shouldn't be a problem to use signed long long.

If you change <= n to != n + 1, you have just written broken code.

Think of n == UNSIGNED_MAX. n+1 will overflow.

Right, but the same is true of signed as well.

The issue I raised is that you replaced a perfectly functioning loop with one that does not behave the same: it does not iterate the same number of times for all inputs.

You said "I can change <= n to != n + 1". You cannot. for n == UNSIGNED_MAX, the former loop will iterate infinitely, the latter loop will never iterate.

Both are well defined to occur. You cannot change the loop behavior and say you have the same loop :)

I never meant to imply that changing "<= n" to "!= n + 1" does not change the semantics of the unsigned loop. This was an example shown in some LLVM documentation as to why you should use signed loop variables. I was just showing that you can still use unsigned variables, have the same semantics as _the signed loop_, and get vectorized code.

We (as an industry) need to stop using -fomit-frame-pointer, at least by default. I'd be interested to see if there's any real-world workload (not a benchmark) where it makes even a measurable difference, let alone a significant one. The problem, of course, is that it destroys the ability to examine performance in production with tools like DTrace and the like. A one-time couple-of-percent improvement in some cases (which, again, would be surprising to see anyway) is not worth losing the ability to gain more performance improvements for the rest of the software's lifetime.

It was very beneficial on the register-starved x86, but I notice less impact on amd64.

I definitely also have a debug-mode that builds with -g and without -s -O3 -fomit-frame-pointer.

amd64 has enough registers that using one for the frame pointer isn't too bad.

On the other hand, some platforms don't need a frame pointer for debugging; if you emit correct unwind tables that's enough for the debugger to construct a backtrace. I haven't looked lately but am pretty sure amd64 ELF is one of them.

Also, modern gcc generates okay debug info for optimized programs such that it's much more likely you can read variables out of a crash in gdb.

> On the other hand, some platforms don't need a frame pointer for debugging; if you emit correct unwind tables that's enough for the debugger to construct a backtrace. I haven't looked lately but am pretty sure amd64 ELF is one of them.

Working with unwind tables is more complicated and many useful debugging tools don't do so.

It's good to have it at least in dev, but many bugs only show up in production, and performance work can only usefully be done on the shipping, optimized code.

> Clang is much more naggy than GCC. For instance, I frequently switch on boolean variables. Clang doesn't even have a "-Wno-" intrinsic I can push to temporarily disable this.

I don't know how responsive they are, but you might try filing a bug.

Why would you want to switch on a boolean statement?

EDIT: Ignore me. I see your explanation below.

The article does not mention optimization, but standard optimization levels (-O1, -O2, -O3) are not chosen optimally at all. See http://dl.acm.org/citation.cfm?id=1356080

For tuning GCC flags for a particular application, see http://opentuner.org/

" are not chosen optimally at all"

Optimally is kind of a nonsensical term for this. There is no optimality to be found here in any mathematical sense. Even COLE does not generate something optimal; it finds something mildly reasonable.

However, the main problem with things like COLE, is that they mostly discover phase ordering issues or latent optimization bugs. Those are bugs and problems that should be fixed.

Most of phase ordering is decided by design and architecture, ie "we want it to work a certain way for certain reasons". To the degree it doesn't work best that way, it's usually a problem to be fixed, not a fundamental issue that COLE has discovered.

"-Winit-self" actually disables the common "var x = x" idiom which is used to silence "uninitialized variable" warnings.

Personally I consider "-Werror" stupid. Warnings are designed to help and be reviewed, but "100% warning-free" code should not be a goal in itself. For instance, I'd rather have "uninitialized" warnings than use "i = i", which, you know, might actually be correct code if "i" was available in scope and you wanted an additional copy that you can modify (nested for loops come to mind).

I sometimes leave warnings when silencing them "uglifies" the code.

"-fvisibility=hidden" is not so easy to leave on all the time when building software build by others. What I use is mostly "-fvisibility-inlines-hidden" in C++ code, even when building OSS projects, for which I never had a problem so far (in c++ inlines _are_ expected to have hidden visibility).

I also use "-march=native -flto=jobserver" and "-fwhole-program" (when linking) when I'm targetting my own hardware.

I also noticed that gcc now supports "-Og" to build optimized programs without impact on debugging, for which I had a very long command line before. "-Og -g3" gives pretty decent performance and optimal debugging, which is ideal for beta-testing programs.

gcc supports a pretty infinite list of command line switches. Actually, if you know what you are doing, you can optimize a program so well it's pretty much impossible to beat even by hand-crafting assembly. I know: I tried several times, before realizing I could move a function to a separate object file and supply a different set of optimization flags tuned just for that function.

For instance, "-Ofast" is actually safe most of the time for system utilities (and most other OSS software), and gives quite a boost for programs working with floats (most image-resizing loops and the like). Very few programs actually rely on exact IEEE arithmetic. Though I never use it, since finding issues might be _very_ hard.

> "-Winit-self" actually disables the common "var x = x" idiom which is used to silence "uninitialized variable" warnings.

so now we have a flag that disables a hack in code that is used to silence a warning caused by another flag. This is about as ridiculous to my (untrained in C) mind as having -Wall not, in fact, turn on all warnings.

The number of warnings that GCC can generate covers some very speculative ground, which is sometimes legitimate code. In fact, many C idioms that were once considered commonplace are now warnings because of their subtle semantics.

Take assignment/evaluation in a condition:

  if(a = [expr])
was not so frowned upon before, because it was sort of implicit that "a" was also needed in the nested block that followed. Now it's a warning without double parentheses, because the typo of using = instead of == is also common.

The list goes on and on. In fact, the level of diagnostics that you get in C is pretty high, and probably best in class compared to any other language, thanks to the maturity of the toolchain.

If you're using var x = x; then stop. The compiler's data flow analysis can see when an initial assignment is never used, so always initializing has no cost once optimizations are turned on. Besides, even if it were not optimized away, until a profiler actually shows that a variable initialization is your bottleneck, it's a waste of time, and it will make code refactoring harder and cause bugs down the line. Maybe not in this function, maybe not by you, but someone will add a new conditional branch somewhere where your variable won't be initialized.

As for the optimization, I just tested on my machine:

   int main(int argc, char** argv) {
      int x = 0;

      switch (argc) {
         case 0:
            x = 1;
         case 1:
            x = 5;
         case 2:
            x = 7;
            x = 9;
      }

      return x;
   }

   gcc -O3 -o opt-assign opt-assign.c

   (gdb) disassemble main
Showed that x is never assigned zero.

I think you misunderstood the point. The discussion about var x = x has nothing to do with performance.

-Werror is debatable. In a controlled build environment (with fixed GCC version and fixed distribution), I am convinced this is the way to go. After careful review, -Wno-xxx switches can be added on demand (such as -Wno-overlength-strings, haha) to limit annoying/irrelevant messages, but experience shows that having thousands of warnings in the logs leads to ignoring issues (this is especially true when the codebase is huge).

Except in one case (gcc 2.9x), I stopped using a fixed version of a compiler a long time ago; nowadays I only use a fixed build host when needed. -Wall -Wextra -Werror will magically make your build fail because of a new warning which wasn't caught before, or a system library update, or other changes.

In my mind it's way better to just build your software with -Wall/others and implicitly review all warnings before pushing changed files to a common repository (which is kind of obvious, since while you are developing you are also looking at the build logs).

For instance, -Wunused-parameter is helpful, but for a provisional API that's going to stay in the repository for a couple of weeks the warning is useless. Some people would go on and add useless code to silence the warning; I will just commit the code as-is.

I have never in my career had large projects with more than a handful of warnings anyway. The kind of person that would use -Werror would, in reality, do perfectly fine without it.

> The kind of person that would use -Werror would, in reality, do perfectly fine without it.

Eliminating warnings can often be a hassle for no short-term benefit, and there's a decent number of people who agree that zero warnings is useful but don't actually get around to sticking to that if the compiler doesn't force them to.

The bigger benefit is when working with people that will just ignore warnings completely if you let them.

I tend to go with:

  gcc -c -Wall -Werror -std=c99 -pedantic -O3
I use -std=c99 because I use these two features of C:

1. Mixed declarations and code, e.g.

  double x = 4.8 * 5.3;
  printf("x = %.15g\n", x);
  double y = 8.7 * x;
  printf("y = %.15g\n", y);
2. Flexible array members, e.g. for my safe string operations:

  struct str {
      long len;
      char data[];
  };
If I were to use -ansi (same as -std=c89), instead of -std=c99, then -pedantic would give me these errors:

  error: ISO C90 forbids mixed declarations and code [-Werror=edantic]

  error: ISO C90 does not support flexible array members [-Werror=edantic]
(By the way, I have no idea why the error message omits the "p" from pedantic there. That doesn't smell right. I hope the gcc people fix that.)

I use -O3 for optimization, and I chose level 3 because that enables -finline-functions. I typically avoid macros, even for simple one-liners like this:

  /* Increment the reference count. */
  void hold(value f)
With -finline-functions enabled (via -O3), I can see that function being expanded inline, by examining the assembly output of gcc -S -O3.

Why don't you use -O2 -finline-functions then?

Good question. I go with -O3 because it does even more optimizations, but to be quite candid I really don't know what impact -fpredictive-commoning, -ftree-vectorize, and others have on my resulting machine code, if any.

I did a quick experiment, compiling one C file with -O2 -finline-functions and another with -O3, using the -S flag so I could see the assembler output.

The only difference I saw was this:

  .comm	free_list,8,8
Versus this:

  .comm	free_list,8,16
Who knows. I guess I'm really using -O3 because "3 is more than 2" -- in other words, "Ours goes to 11!" :)

predictive commoning is a loop optimization that commons cross-loop redundancies.

It is basically CSE "around loops". You can generalize it to subsume loop store motion and strength reduction, but most compilers (including GCC) don't bother.

For example, it transforms

  for (int i = 0; i < 50; i++)
    a[i+2] = a[i] + a[i+1];

into

  p0 = a[0];
  p1 = a[1];
  for (int i = 0; i < 50; i++) {
    p2 = p0 + p1;
    a[i+2] = p2;
    p0 = p1;
    p1 = p2;
  }

Eliminating a whole ton of loads and stores.

It's been a while since i looked at GCC's implementation, but it did pretty well in the past (whether you can do commoning depends on your ability to identify and group sequences, etc)

-ftree-vectorize does the obvious thing (turn on vectorization). How effective it is depends on a lot of factors.

Very surprising to not see the -std option to actually clarify which language standard is being used. Perhaps that isn't as important in C++ as it is in C.

The -ansi switch is used, and it implies -std=c90 for C and -std=c++98 for C++. But an explicit -std might be better, I suppose.

Is there a way to specify C89 but use the C99 style comments (i.e. //)?

I believe that -std=gnu89 should give you C89 with that particular extension (and some others).

Agreed, see my post above about -std=c99.

The use of 'bytecode' is strange when referring to gcc and C code; perhaps 'object code' or 'executable' would've been better. Bytecode is usually reserved for things like the JVM, or other cases where an interpreter/JIT is needed to run your code.

Couple of points:

- The GNU Compiler Collection can compile Java, as well as other languages with "bytecode".

- GCC actually has a native intermediate language, it's just not very well advertised.

In real-world projects, I have never seen a GNU compiler not invoked explicitly, and in fact this article advises being extra explicit with compiler flags. So referring to any output code, especially -o with a .o extension, as 'bytecode' is inherently incorrect, regardless of whatever "intermediate" language GCC actually uses, unless you're outputting that intermediate code.

agreed, which is why I said 'gcc and C'. Is GCC's IR still called GIMPLE?

Since it mentions the Debian hardening wiki, -D_FORTIFY_SOURCE=2 is useful too. I have mixed feelings about -Wall: sometimes it produces useful warnings, other times it's just too many false positives. I prefer to turn on all warnings, and then explicitly turn off the ones I don't want to see (-Wno-pointer-sign, for example), as new compiler versions may add new warnings that could turn out to be useful.

I go overkill on the debugger flags:

    -g -g3 -ggdb -gdwarf-4
Being able to debug C-macros has improved my life!

    (gdb) info macro Py_TYPE 
    Defined at Include/object.h:117
      included at Include/pytime.h:6
      included at Include/Python.h:65
      included at Parser/myreadline.c:12
    #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type)

Isn't -g3 sufficient? ("Level 3 includes extra information, such as all the macro definitions present in the program. Some debuggers support macro expansion when you use -g3.") [Note: I found -g3 to be extremely costly, but I guess everything has a cost...]

Costly in what sense? Compilation time? Binary size?

Binary size! (In my case, the separated .dbg files were HUGE compared to the .so files.) I don't think it has any impact on performance, however (especially as the debug information is put in a separate ELF section, either in a separate file, or mapped but "cold" [i.e. not live in memory, as the runtime does not touch it]).

    -Wl,-O1
"Did you know that you also have an optimization flag for the linker? Now you know!"
Ages ago the linker on SUN used to compile templates. What is GCC doing/using this flag for?

See this LWN article: http://lwn.net/Articles/192624/

So it is not really optimizing the code - more like optimizing the symbol table?

Yes, looks like it. The default symbol lookup can be really slow in some situations, especially in large C++ projects, which can contain many symbols that share a long common prefix thanks to C++ name mangling.

"Optimising the linked output" seems a bit vague but that's about as clear as the man page.

    If level is a numeric value greater than zero ld optimizes the output. This might take significantly longer and therefore probably should only be enabled for the final binary. At the moment this option only affects ELF shared library generation.

Not gcc but clang:

    -Weverything -Werror
That's how I roll.

The GCC equivalent being, I believe,

    -Wall -Wextra -Werror

GCC has no equivalent of -Weverything. -Wall and -Wextra leave lots of warnings disabled.

AFAIK GCC doesn't have a -Weverything equivalent (-Wall and -Wextra are close, but don't enable every possible warning; -Weverything does).

You mean you people don't have all warnings turned on, with -Werror as well? Here, how about I list the ones my project doesn't use, and why:

-Waggregate-return - everything returns an aggregate in C++, and that's not necessarily a bad thing.

-Wlong-long - We use long longs.

-Wmissing-declarations - We don't need to declare every internal function before we use it.

-Wmissing-include-dirs - Sadly necessary; gcc seems to include some on its own.

-Wpadded - Everything's padded.

-Wsystem-headers - I only wish I was in a position where this could be useful :)

As K&R would say, let the machine do the dirty work. If a warning check exists, there's probably a good reason for it.

We use quite a few flags to issue warnings for our code[1]. The linked autoconf macro will detect whether or not the flags are supported on the detected compiler, and automatically add the flags to the build. It works for clang, gcc, g++, and helps us spot errors that would otherwise be hard to detect and fix.

[1] https://github.com/rescrv/HyperDex/blob/master/m4/anal_warni...

-Wall -Wextra -Wpedantic -Werror -std=c++11



Just for reference, the parent is a reference to a lyric from Psy's (the Korean artist responsible for Gangnam Style) song 'Gentleman', wherein he parodies the expression 'Westsiiiiiide' (not that this necessarily makes it any funnier).

-ansi -pedantic

Because it makes things a lot easier when writing cross-platform code, except when using MSC, as that is the ugly sister.

Mine are usually some variation of the following, but I find that I can't always get away with it on projects that are not done solely by myself due to the strictness :(

    -Wall -Wextra -pedantic-errors -funroll-loops -Weffc++ -Wunreachable-code -fno-exceptions -O3 -Werror

I recommend -Wswitch-default and -Wswitch-enum. Without these, you're stuck choosing between foregoing safety when a bug causes an enum to take an unexpected value, and foregoing safety when you add a new value to an enum that needs to be handled in many places.
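A minimal sketch of the failure mode these flags catch; the enum and function names here are made up. With -Wswitch-enum, GCC warns about the switch below as soon as a new enumerator is added without a matching case, even though a default: label is present:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical enum: imagine LOG_TRACE being added here later. */
enum log_level { LOG_DEBUG, LOG_INFO, LOG_ERROR };

static const char *level_name(enum log_level lvl) {
    /* -Wswitch-enum: omitting any enumerator from this switch is a
     * warning even with default: present. -Wswitch-default: omitting
     * the default: label would be a warning too. */
    switch (lvl) {
    case LOG_DEBUG: return "debug";
    case LOG_INFO:  return "info";
    case LOG_ERROR: return "error";
    default:        return "unknown";
    }
}
```

The default: case still guards against an enum variable holding an out-of-range value at runtime, while the warning guards against forgetting to handle a newly added enumerator at compile time.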

Oh hey, that's pretty nice! I'm a big fan of how the functional languages I've gone further than dabbling in are very pushy about covering all cases for pattern matches (basically switch on steroids), even if sometimes the correct answer is to just throw in a catch-all.

Mostly because it forces you upfront to make sure you're considering every case, and it looks like those warnings ought to give me the same kind of nagging with C and enums.

Most of these are clang-specific, but I tend to do something like:

-gfull # generate correct debugging symbols for dead code stripping

-Ofast # fast, aggressive optimizations (clang-specific)

-fvectorize # enable loop autovectorizer

-fdiagnostics-show-template-tree # clang: print C++ template errors as a tree instead of on a single line

-Weverything # clang-specific: enable every single warning

-Wfatal-errors # die after the first error encountered

-ffast-math # enable some floating point optimizations that break IEEE 754 compliance but usually work

-funroll-loops # enable loop unrolling

-fstrict-aliasing # make more aggressive assumptions about whether pointers can point to the same objects

-fatal_warnings # treat linker warnings as fatal

-flto # enable link-time optimization

-dead_strip # enable dead code stripping

-Wno-error=deprecated # I like being able to put __attribute((deprecated)) in code as a note to self

-Wno-error=#warnings # same thing goes for #warning

I generally turn everything on and then use

because that one really rubs me the wrong way.

Also, speaking of flags, I'm personally in love with -MMD. It makes dependency generation pretty painless.

  -Larry -Wall
Saw that in the perl6 build long ago :-)

Then if any functions are a bottleneck, compile them with function-specific O2 or O3.
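One way to do that without splitting files is GCC's per-function optimize attribute (a sketch with a made-up function name; the attribute is GCC-specific and best-effort, and clang ignores it with a warning):

```c
#include <stddef.h>

/* Build the translation unit with -Os for size, but ask GCC to
 * compile just this hot function as if -O3 were in effect. */
__attribute__((optimize("O3")))
long sum_array(const long *v, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += v[i];
    return total;
}
```

The alternative is `#pragma GCC optimize ("O3")` at file scope, which applies to every function that follows it.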

-Os doesn't always produce smaller code than -O2

It also unfortunately breaks a lot of things and exposes more gcc bugs than the commonly used options. I wish "make the output code as small as possible" was the usual thing compilers tried to do.

I don't use GCC anymore, just clang. Does -ansi -pedantic -fwritable-strings -Weverything -Wno-missing-noreturn -Wno-padded -Wno-nested-anon-types -Wno-switch-enum count?

-Wall -Werror -Wextra -ansi -pedantic As a student I think these are very useful flags; they keep the code in good shape and prevent common pitfalls. Of course sometimes I have to use -g3 ;)

Oh, nice. -fvisibility=internal is what I was about to go looking for. I want to write an operating system as a static library, so I don't want its internal symbols visible.
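A sketch of what that looks like in source, with made-up names: built with -fvisibility=hidden (or internal), only symbols explicitly marked "default" are exported from the shared object.

```c
/* Internal helper: with -fvisibility=hidden or =internal it never
 * appears in the library's dynamic symbol table. */
static int os_internal_helper(int x) {
    return x * 2;
}

/* Explicitly exported entry point: the attribute overrides the
 * hidden default set on the command line. */
__attribute__((visibility("default")))
int os_public_entry(int x) {
    return os_internal_helper(x) + 1;
}
```

The difference between internal and hidden is that internal additionally promises the function is never called from outside the module, which can enable some ABI shortcuts.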

You can learn a lot at the spec.org benchmark site (both C/C++ and Java apps), which many of the heavyweights (Hitachi, Cisco, Intel) use to provide detailed looks at hardware and software tweaks, e.g. http://spec.org/cpu2006/results/res2013q4/cpu2006-20130923-2...

If you have -Wall, -Winit-self doesn't add anything. Just tested with gcc 4.2 on RedHat 6: you still get the uninitialized warning.

Whatever flags were used to build Python[1].

[1] https://github.com/joe-jordan/complicated_build

The options -O2 -Wall -std=XXX are very helpful for me.


here's mine:

-u -r -s -t -u -p -i -d

That's a very easy question. It's usually made of green, white, red and black. Details here: http://ncusar.org/email_graphics/images/gcc-flags.gif
