Yeah, static linking could explain the code size difference. On my machine, where I have gccgo 4.8 built and installed, the gccgo-produced executable is far smaller than the one built by Go 1.1 rc1. Go builds static executables, whereas the installed gccgo with just -O3 builds a dynamic one.
'strip' will remove symbols - debug or unused - but that doesn't change the way static linking works: all the code, instead of living in a shared-library dependency, is stuffed into the executable. That's still going to be there.
Definitely. I intended it as a reasonably quick comparison of Go and GCCGo, to see if GCCGo warranted further investigation right now. I really like the GCC toolchain so look forward to it supporting Go fully, but, as someone else noted, the Go compiler seems "fast enough".
I often find -O2 produces faster executables.
Agreed. Probably should have used -O2 or -Os. I just recompiled with them and didn't see any significant difference.
Anyhow, the post was really about answering the question: should you as a user of (and not developer of) Go check out GCCGo right now? From my perspective, I'm sticking with Go's compiler.
Why would anyone care about relative code size or compile time for a program this small? This is obviously a "cost of entry" issue.
I have trouble imagining that doubling the code size and complexity of the "mandel" example would double the compile time or code size with either toolchain. Why not try a handful of programs to get a scatterplot between the two toolchains?
Why would anyone care about relative code size or compile time for a program this small?
Because it can be extrapolated [obviously, not perfectly]... The nice thing about a mandelbrot benchmark is that it's FP- and optimization-driven, so it does a decent quick-and-dirty job of illuminating the raw, parallelizable performance different compilers produce. Once you add in regexes, databases, etc., you move beyond the compiler and on to the performance of secondary or tertiary systems.
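For what it's worth, the hot loop such a benchmark exercises is just a tight FP iteration with no library or I/O dependence, which is why it isolates the compiler. A minimal sketch (the function name and iteration limit are mine, not taken from any particular shootout entry):

```go
package main

import "fmt"

// mandelbrot counts iterations until |z| exceeds 2 (i.e. |z|^2 > 4),
// up to limit. This tight FP loop is what the benchmark stresses:
// pure float multiplies/adds the optimizer can work on.
func mandelbrot(cr, ci float64, limit int) int {
	zr, zi := 0.0, 0.0
	for i := 0; i < limit; i++ {
		zr, zi = zr*zr-zi*zi+cr, 2*zr*zi+ci
		if zr*zr+zi*zi > 4.0 {
			return i
		}
	}
	return limit
}

func main() {
	fmt.Println(mandelbrot(0, 0, 50)) // inside the set: hits the limit, prints 50
	fmt.Println(mandelbrot(2, 2, 50)) // far outside: escapes immediately, prints 0
}
```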
Why not try a handful of programs to get a scatterplot between the two toolchains?
Because relative to the question I was trying to answer ("as someone investigating Go, should I also investigate GCCGo?"), that would be a waste of time. Were I to have gotten a result that suggested that GCCGo was markedly better along some dimension, I would have done more testing to quantify "markedly".
Go is a lovely, young language, and the GCCGo implementation is even younger. The GCCGo folks will catch up with Google soon enough. Until then, Google has produced a nice toolchain for you.
I'm not arguing that it's a bad microbenchmark for FP run-time performance.
I'm arguing that it's a bad microbenchmark for compile time and code size. I'm not all that familiar with Go, but in most systems I've used, if you write a really tiny program you'll find there's a "fixed cost of entry" where the standard libraries get munged through the linker (or whatever), and writing a program 10x larger will not cost you 10x as much. Since you took only one measurement of one program, you have no idea what the trend lines might look like here.
It seems like you are going out of your way to do this badly and make a snap judgement that you can proclaim to the world.
I am surprised at the extent to which you seem to think you need to ration your investigation and avoid wasting your time on the exhausting task of "also investigating GCCGo".
Also, in the world of high-performance computing, 10% for zero programmer effort is huge.
Not only that, but I have heard people say that for compute-bound (code generation) tests that are single-threaded, GCCGO does better; however, if you run code with many goroutines (which most Go programs have), the standard Go compiler is faster.
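I can't vouch for that claim, but the kind of workload being described is easy to sketch: many short-lived goroutines each doing a tiny unit of work, so that scheduler and channel overhead dominates rather than code generation. A hypothetical example (names and sizes are illustrative, not from any benchmark):

```go
package main

import (
	"fmt"
	"sync"
)

// fanOutSum spawns n short-lived goroutines, each computing one square
// and sending it over a channel. The per-goroutine work is trivial, so
// the runtime's goroutine/channel machinery is what gets exercised.
func fanOutSum(n int) int {
	results := make(chan int, n)
	var wg sync.WaitGroup
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			results <- v * v // tiny unit of work per goroutine
		}(i)
	}
	wg.Wait()
	close(results)
	total := 0
	for r := range results {
		total += r
	}
	return total
}

func main() {
	fmt.Println(fanOutSum(100)) // sum of squares 1..100 = 338350
}
```

Crank n up to the hundreds of thousands and this kind of program measures the runtime's scheduler far more than the compiler's code generation.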
As of 1.1, Go code generation is also drastically improved. I would love to see Go 1.0.3 vs Go 1.1 vs GCCGO tests with a wide sample of programs.
Overall I'd say Go did better: the instances GccGo won (although more numerous) were typically by a small margin, while those it lost were by larger margins.
Overall, Go 1.1rc seems to have improved quite a bit from my previous test (1.0.x), unless my memory betrays me.
Note that these are far from all the 'computer language benchmarks game' tests - only those I managed to get done during a lunch break - and as such they may skew the results compared to a full benchmark comparison.
Good to know; in that case it would be a more apt comparison to use the current best Go versions from the benchmarks game, as they better reflect where Go stands performance-wise (with the usual microbenchmark disclaimer).
There is a benchmark suite which includes the 'computer language benchmarks game' (aka language shootout) programs in the Go source tree under go/test/bench/shootout.
Last time I checked, GccGo won most of them hands down, and even more so when you changed the optimization from -O2 to -O3 (on my machines at least - i5/i7 - -O2 always lost to -O3 in these tests, so I don't know why they defaulted to -O2 here).
This was on 1.0.3, I think; I haven't checked on 1.1x yet, and given that there have been improvements in the Go compiler, the overall results may have changed considerably.
IMHO, both compile fast enough. Whether gccgo or gc, compile time is acceptable, and much faster than compiling a similar-size C program. What matters is the code they produce; compiler optimization is what makes the difference.