This is realistically not an issue. First, on a 64-bit machine, the range of actually mapped addresses is tiny relative to the full 64-bit value space. Second, from an attacker's perspective, the values corresponding to mapped memory are extremely difficult to predict, and the values that happen to be binary-equivalent to pointers into large allocations are impossible to predict, even with access to the source.
If you think you can still do it: all of *.golang.org and golang.org run Go on App Engine, with the source code freely available. This is your opportunity to get a back door into Google's servers.
If you make a huge allocation (many pages), isn't the Go runtime very likely to call malloc()? For large allocations, malloc() is going to hand you a run of fresh pages, and you will generally get an address at (or at a fixed offset from) the start of a page. The offset of the pointer within the page is then likely to be deterministic, so you probably only need one unit of pointer-equivalent data per page. If you have enabled huge pages (e.g. 2MB, which is not uncommon), then you have already soaked up 21 of the 48 address bits that x86-64 implementations actually use, leaving only 27 bits to collide. The stack grows down from 2^46, and typical heap values on x86-64 are still well within 32 bits. Finally, a collision need not be frequent to be a serious DoS concern.
The Go runtime does not call malloc(3) for the heap. It reserves address space at known high locations (above 2^32) with mmap(2) using the MAP_FIXED flag, and it does so in 16GB increments (or is it just one 16GB reservation? can't remember).
I won't comment on the DoS concern until I've investigated further.