EDIT: I modified the Go program to use int64 instead of int (the upper bound of the first case is not a valid 32-bit integer), and the execution time is now much higher: 2 minutes 41 seconds on my laptop.
Presumably that was the target GOARCH the OP was aiming for, since int on GOARCH=amd64 is 64-bit (in Go, int mirrors the native bit-ness of the architecture instead of virtually always being 32-bit as in C/C++).
Using int64 on a 32-bit GOARCH (which I assume is your current GOARCH, since int is overflowing at 32 bits) will certainly be a lot slower than using int64 or int (which are the same size) on a 64-bit GOARCH.
Also as far as speed of execution goes, it'll likely depend greatly on the version of Go you're using. The RC releases of Go 1.1 are much, much faster than Go 1.0.3 for a lot of code.
Seriously, in a language designed around 2010, a common "int" variable can be 32-bit? 64-bit should be the minimum, because 32 bits is too small to express common values these days. It's 2013: if you actually want a 32-bit integer, use int32. Better yet, make "int" always a fixed size and provide something like "register_t" or "int_fast_t" as the native-width int.
You can't even assign a smaller type to a larger one in Go without an explicit conversion, yet you can recompile the same program on a different arch and, like this program, have it fail completely because the size of the variables changed. And baked-in coroutines, so that you won't run out of address space on 32-bit? Really?
Go is so archaic in many ways like this. It's like somebody from the 70s tried to come up with a better C than K&R.
I share your annoyance to some degree, and I think it gets even worse when you mix CGO into the equation: people writing CGO bindings will often leave C ints as Go ints when porting structs and function signatures, which is a terrible idea (on a 64-bit system with a 64-bit C/C++ compiler and a 64-bit Go compiler, int will be 32-bit in C but 64-bit in Go).
OTOH, having int be variable-sized means that maps and other int-indexed structures naturally grow in capacity with the current architecture. I'm glad they didn't just peg everything to 64-bit as a 'modern' shortcut: I do a lot of Go programming that targets 32-bit ARM CPUs, and the performance would be shit if the Go runtime assumed a 64-bit baseline.
In any case, well-written Go code will generally use the specific int type that makes sense for the data at hand (int16, int32, int64, etc.) and not simply int. A programmer declaring a lot of unqualified 'int' variables (for things other than loop indexes and the like) in his or her Go code is, in most cases, doing Go wrong.
"It's like somebody from the 70s tried to come up with a better C than K&R."
That's basically what Go was meant to be and I suspect the Go team would consider your description of Go to be more flattering than you perhaps meant it to be.
That's why you have a -fint-is-32-bit flag or some other way to bend the standard to make it run faster on those systems when needed. There's no way Go, with its shared memory between threads, is going to survive any future shift to 128-bit computing anyway, so just making int always 64-bit is the best choice; if your maps or indexed structures are that vast, you'll need better abstractions than coroutines.
> [better C than K&R is] basically what Go was meant to be and I suspect the Go team would consider your description of Go to be more flattering than you perhaps meant it to be.
Well it certainly was not intended to be flattering, but I suspect you are correct.
UPDATE: After updating Go from 1.0.2 to 1.1, this matches my results: it now takes about 2 minutes, 47 seconds to run through the input (and Code Jam confirms the output is correct). This seems like a significant detail to me, so I'll update the post.
More importantly, the argument that Ruby is "too slow" seems to hinge on the L1 input of the Fair and Square problem being solvable with "fast" languages using the same unoptimized algorithm as can be used for the small (S) input. If you look at the Code Jam results, this is simply false. In fact, a large percentage of Ruby users passed the Fair and Square L1 input, higher than with many "fast" languages:
Language % of S input solvers successful on F&S L1 input
Ruby ---------- 29.6 (112 L1, 378 S)
C ------------- 20.4 (199 L1, 973 S)
Java ---------- 27.6 (1183 L1, 4285 S)
Go ------------ 25.4 (15 L1, 59 S)
(from http://www.go-hero.net/jam/13/languages/0 )
I think this means you need to rethink your conclusions.
In fact, algorithm contests are usually designed so that the small inputs are solvable by brute-force algorithms, but you need to optimize them to solve the larger inputs and harder problems. It's not just people using Ruby and other "slow" languages who have to rethink their approach.
You are supposed to use int when you don't care about its size (int = native word size). If you are writing a program where it matters you should explicitly use int32 or int64.