Lots of inlining (which -O3 always causes; I often find -O2 produces faster executables)? Debugging information? Pulling in the standard library?
A compiler doesn't produce an extra 3MB of code for no good reason.
This seems too simplistic.
Anyhow, the post was really about answering the question: should you as a user of (and not developer of) Go check out GCCGo right now? From my perspective, I'm sticking with Go's compiler.
The only reason it was ever faster was backend-related issues that should have long since been solved.
I have trouble imagining that doubling the code size and complexity of the "mandel" example would double the compile time or code size with either example. Why not try a handful of programs to get a scatterplot between the two toolchains?
Why would anyone care about relative code size or compile time for a program this small?
Go is a lovely, young language, but the GCCGo implementation is very young. The GCCGo folks will catch up soon enough with Google. Until then, Google has produced a nice toolchain for you.
I'm arguing that it's a bad microbenchmark for compile time and code size. I'm not all that familiar with Go, but in most systems I've used, if you write a really tiny program, you will find that there's a "fixed cost of entry" where the standard libraries get munged through the linker (or whatever), and that writing a program 10x larger will not cost you 10x as much. Since you took only one measurement of one program, you have no idea what the trend lines might look like here.
It seems like you are going out of your way to do this badly and make a snap judgement that you can proclaim to the world.
I am surprised at the extent to which you seem to think you need to ration your investigation, sparing yourself the exhausting task of "also investigating GCCGo".
Also, in the world of high-performance computing, 10% for zero programmer effort is huge.
How can we extrapolate from a single data point? We need 2 points for any kind of line.
>>Were I to have gotten a result that suggested that GCCGo was markedly better along some dimension...<<
Maybe you would have gotten that result on a different program?
As of 1.1, Go code generation is also drastically improved. I would love to see Go 1.0.3 vs Go 1.1 vs GCCGo tests with a wide sample of programs.
The compilers were: Go 1.1rc, GccGo version 4.8.0 20130502 (prerelease), on an Arch Linux 64-bit system, Core i5.
Overall Go 1.1rc seems to have improved quite a bit from my previous test (1.0x) unless my memory betrays me.
Note that these are far from all the 'computer language benchmarks game' tests, only those which I managed to get done during lunch break, and as such they may skew the results compared to a full benchmark comparison.
Most to the point, the mandelbrot program shown on the benchmarks game is not the same -- someone contributed an improved program to the benchmarks game.
Last time I checked GccGo won most of them hands down, and even more so when you changed the optimization from -O2 to -O3 (on my machines at least, 'i5/i7', -O2 always lost to -O3 in these tests, so I don't know why they defaulted to -O2 here).
This was on 1.0.3 I think; I haven't checked on 1.1x as of yet, and given that there have been improvements in the Go compiler, the overall results may have changed considerably.
time ./mandel.golang 16000 > /dev/null
time ./mandel.gccgo 16000 > /dev/null
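For context, here is a minimal sketch of the kind of mandelbrot kernel those timings exercise. This is my own illustration, not the benchmarks game program (which, as noted above, has since been replaced by an improved contribution); the argument is the grid width, and it just counts points that stay bounded rather than emitting an image:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// escape returns the number of iterations before the point (cr, ci)
// escapes |z| > 2 under z = z*z + c, or limit if it never does.
func escape(cr, ci float64, limit int) int {
	var zr, zi float64
	for i := 0; i < limit; i++ {
		zr, zi = zr*zr-zi*zi+cr, 2*zr*zi+ci
		if zr*zr+zi*zi > 4 {
			return i
		}
	}
	return limit
}

func main() {
	n := 200 // grid width; overridden by the first command-line argument
	if len(os.Args) > 1 {
		if v, err := strconv.Atoi(os.Args[1]); err == nil {
			n = v
		}
	}
	inside := 0
	for y := 0; y < n; y++ {
		for x := 0; x < n; x++ {
			// Map pixel coordinates onto roughly [-1.5, 0.5] x [-1, 1].
			cr := 2*float64(x)/float64(n) - 1.5
			ci := 2*float64(y)/float64(n) - 1.0
			if escape(cr, ci, 50) == 50 {
				inside++
			}
		}
	}
	fmt.Println(inside)
}
```

The real benchmarks game program writes a PBM bitmap and is heavily tuned, so absolute numbers from a sketch like this won't match the timings above; it only shows the shape of the workload (a tight floating-point inner loop with no allocation), which is exactly the kind of code where backend quality dominates.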
I say 'your' assuming that OP is the author.