
Golang on ARM - selvan
https://github.com/golang/go/wiki/GoArm
======
dekhn
One of the funniest experiences I had was building an ARM go binary on an
x86-64 linux box using a cross compiler (nothing new there). Then, by accident
I tried to run the ARM binary on my x86-64 box, instead of on the ARM box. It
ran like a normal executable, which confused me until I realized I had
installed qemu-arm, which registers a Linux binfmt_misc handler for ARM
binaries and transparently emulates them. Of course, copied over to the
Raspberry Pi, it ran fine there too.

~~~
biomcgary
Did you benchmark performance on QEMU vs bare metal?

------
IshKebab
This is one of Go's best features for me. You can trivially cross-compile
statically linked binaries, copy them over to an ARM system and they run fine.

Contrast that with C/C++ - sure you _can_ cross-compile, and I have in the
past, but it's a huge pain to set up, and who ever remembers the command-line
options? Those "target triples" aren't even documented anywhere. Plus, good
luck building a statically linked C++ program.

Lua might be a decent option too.

~~~
frame_perfect
What about LLVM? Go's support of cross compilation is laughable in comparison
to the different LLVM backends. And you can just use the -arch armv7 and -arch
arm64 shortcuts instead of those target triples.

Also, FWIW, the reason Go can even be cross-compiled in the first place is
that its runtime carries platform-specific C and assembly code, so it can
support the different architectures/platforms.

Of course, if Go were just an LLVM frontend instead of using its own
toolchain it wouldn't even need the platform-specific C code... And it would
support a shitton more platforms, like asm.js. But I guess Google just hates
Apple that much?

~~~
arthurbrown
Russ Cox wrote a great comment[1] on HN about why using the Plan 9 toolchain
rather than LLVM or even GCC was a competitive advantage for them in terms of
speed of delivery, control, and iteration.

"Most importantly, it has been something we understand completely and is quite
easy to adapt as needed. The standard ABIs and toolchains are for C, and the
decisions made there may be completely inappropriate for languages with actual
runtimes.

For example, no standard ABIs and toolchains supported segmented stacks; we
had to build that, so it was going to be incompatible from day one. If step
one had been "learn the GCC or LLVM toolchains well enough to add segmented
stacks", I'm not sure we'd have gotten to step two."

"... the custom toolchain is one of the key reasons we've accomplished so much
in so little time. I think the author doesn't fully appreciate all the reasons
that C tools don't work well for languages that do so much more than C. Most
other managed languages are doing JIT compilation and don't go through
standard linkers either."

[1]
[https://news.ycombinator.com/item?id=8817990](https://news.ycombinator.com/item?id=8817990)

~~~
pcwalton
> For example, no standard ABIs and toolchains supported segmented stacks; we
> had to build that, so it was going to be incompatible from day one. If step
> one had been "learn the GCC or LLVM toolchains well enough to add segmented
> stacks", I'm not sure we'd have gotten to step two."

LLVM now supports both segmented stacks [1] and precise garbage collection
with patchpoints [2].

Tweaking support for segmented stacks is quite easy—I added the support for it
on x86 Mac. You simply adjust the prolog emission code in X86FrameLowering.cpp
and likewise for the other targets.

[1]:
[http://llvm.org/docs/SegmentedStacks.html](http://llvm.org/docs/SegmentedStacks.html)

[2]:
[http://llvm.org/docs/StackMaps.html](http://llvm.org/docs/StackMaps.html)
[http://llvm.org/docs/Statepoints.html](http://llvm.org/docs/Statepoints.html)

~~~
sunnyps
Yeah, the point is that it wasn't there at that time.

------
vessenes
It's incredible to run buildall.sh on a recent go repo and see all the cross-
compilers get built with no fuss or hassle. There are at least 30 different
target platforms I can get to from my ubuntu box with a single command line.

I'm not aware of anything else that comes close to making cross-compilation
this easy.

~~~
bbrazil
I build nightly binaries for Prometheus for all the possible archs. While
everything should in principle compile, small issues can still prevent
cross-compilation.

darwin/arm, darwin/arm64, plan9/386, plan9/amd64 and solaris/amd64 don't
compile for any of the 11 binaries (e.g. plan9 doesn't work due to logrus not
having support for it).

This leaves me with the list at
[http://www.robustperception.io/prometheus-nightly-binaries/](http://www.robustperception.io/prometheus-nightly-binaries/).
That's still 20 architectures I can compile almost everything for
automatically with an ansible+jenkins setup.

~~~
4ad
What is the problem with solaris/amd64?

~~~
bbrazil
../../Sirupsen/logrus/text_formatter.go:28: undefined: IsTerminal

The logrus library we use for logging doesn't support Solaris.

~~~
threemux
Looks like it does now:

[https://github.com/Sirupsen/logrus/blob/master/terminal_solaris.go](https://github.com/Sirupsen/logrus/blob/master/terminal_solaris.go)

~~~
bbrazil
Good to know, thanks. We vendor our dependencies, so it'll be the next
release before this works out of the box.

------
bigdubs
Maybe I'm in the minority but I would rather have resources devoted to
improving compile times in 1.5+ versions vs. having golang work on arm.

Isn't the best course usually to build a solid foundation, then branch out?

~~~
tptacek
Really? I know 1.5 is slower than previous versions, but it's still by far the
fastest compiler I've ever worked with.

~~~
bigdubs
In the global scheme of things it's still pretty fast, but builds on 1.5 take
2x the time of 1.4; that's pretty bad for existing Go users.

~~~
nickysielicki
Why is that a problem for you? With golint / gocode I never have any need for
extremely fast compilation. What's your setup like that you're rebuilding so
often?

~~~
vox_mollis
Personally, my biggest beef would be that the go developers specifically stood
against generics on account of compiler complexity => slow compiling.

It seems they accepted this sacrifice for some other unimportant feature*

*edit: Looks as though the feature they sacrificed performance for was writing Go in Go.

~~~
tptacek
Compilation in Go remains lightning fast, again, faster than any other
compiler I use; fast enough that it's sane to run the compiler every time I
save a file so my editor can highlight the errors.

~~~
Locke1689
I think there are two schools of thought:

1) Perf for a product should be measured against some shipping benchmark, and
intermediate regressions don't matter as long as we come in below the
benchmark at ship time.

2) Perf improvements should be "locked in" and not regressed, unless
strenuously justified.

I think the parent is of group (2), where it doesn't necessarily matter that
it's better than the benchmark of "against other compilers" but only that Go
regressed from their earlier perf metrics for what they believe to be
inadequate justification.

I think both groups have valid points. I think (2) is especially good at
preventing "perf creep" wherein you continually justify small perf regressions
for new features, explaining it away that competitors are doing the same.
Eventually, you get to the point where everyone's just slow, but they have no
impetus to improve because they're still better than the competition.

~~~
vox_mollis
Group (3), of which I am a member:

If the developers take a specific principled stance against
$really_useful_feature on the grounds that it would introduce complexity and
thus degrade performance, then compromise that same principle in order to
accomplish $entirely_unnecessary_port, it puts the lie to their avowed
principles.

I don't mind that go compiles more slowly. To Thomas's point, it's still
really fast. What I mind is that a good portion of the go community has
accepted the lack of generics on the basis of principles that they haphazardly
violated in order to scratch the useless but fun intellectual itch of
bootstrapping the compiler.

Now that they've done this, they have revealed themselves to be entirely
dishonest about their complexity/performance principles, and now the one
legitimate objection that has kept generics out of the language has nothing to
stand on.

Again, I don't care about performance per se, I care about dishonesty and
hypocrisy in the platform's objectives.

~~~
tptacek
"Entirely dishonest". "Dishonesty and hypocrisy". In a discussion about
compiler performance. Do you understand that framing things like this makes it
seem like discussing things with you is intractable?

Absolutely nobody in the universe promised you generics in Go. You may never,
ever get them; the longer the language thrives without them, the less urgent
they seem.

~~~
pcwalton
The way the post you're responding to was phrased is indeed hyperbolic. But I
think there is a valid point that compiler performance concerns are a pretty
unconvincing reason to not have generics, especially in light of a
demonstrated high tolerance for compiler performance regressions.

~~~
tedunangst
Why not consult the Go FAQ for their reasoning on omitting generics?

> Generics are convenient but they come at a cost in complexity in the type
> system and run-time. We haven't yet found a design that gives value
> proportionate to the complexity, although we continue to think about it.

The word performance doesn't appear in that paragraph.

~~~
AnimalMuppet
Complexity in the type system can easily have a negative effect on compiler
performance.

------
Locke1689
Cool! What does the Go memory model look like? Is it stricter than ARM's, or
could currently-working programs break when run there?

~~~
4ad
[https://golang.org/ref/mem](https://golang.org/ref/mem)

Btw, in case it's not clear, Go has worked on ARM for the last 6 years; this
is not a new thing.

It is the job of the compiler and runtime writers to ensure this model holds
on whatever the target platform is. It doesn't matter whether the target has
a strong or weak memory model: correct Go programs written for the Go memory
model should work correctly on any target.

Of course, it's possible to write Go programs that violate the memory model
but happen to work on amd64 and not on some other platform. But those are
incorrect Go programs (and the race detector has a very good chance of
detecting these problems). I will note that it's significantly harder to
write such programs accidentally in Go than it is in C.

------
vferreira
How about support via LLVM? Shouldn't that be enough?

~~~
kyrra
Go already has 2 compiler implementations. There is gc (Go Compiler) and
gccgo[0].

[0]
[https://golang.org/doc/install/gccgo](https://golang.org/doc/install/gccgo)

~~~
frame_perfect
But LLVM already has a lot of good backends (emscripten, arm, ppc, x86,
amd64). Why reinvent the wheel when you could just make Go an LLVM frontend
and be done with it?

~~~
bassislife
Compile times? Flexibility over the backend?

That would be my hunch. I think it makes a lot of sense. Also, note that this
project exists:
[http://llvm.org/viewvc/llvm-project/llgo/](http://llvm.org/viewvc/llvm-project/llgo/)

~~~
pcwalton
> Compile times?

There can be a compile time hit, but you can use FastISel, which is not bad.
The compile time hit is mitigated by the fact that you get a far, far more
mature series of optimization passes. Since the Go compiler is itself written
in Go, more optimizations would make the Go compiler itself faster, further
reducing the performance impact.

> Flexibility over the backend?

Writing to LLVM in no way restricts you to a backend. You simply need a
language-level IR that you can retarget to different backends as you would
like. This is what Swift, for example, is doing with SIL.

~~~
bassislife
> Writing to LLVM in no way restricts you to a backend. You simply need a
> language-level IR that you can retarget to different backends as you would
> like.

It still ignores toolchain flexibility. Especially given the specific needs of
Go's managed runtime.

------
ihsw
This is for both development on ARM systems (standard go commands, e.g. go
build, etc.) and deploying to ARM systems (setting ARM as a compile target).
That said, I'm genuinely curious why anyone would develop on an ARM-based
system as opposed to a run-of-the-mill desktop.

I realize that standard towers may be supplanted by their SoC brethren, but so
what? Will we all be using iMac-like computers as desktops in the next 36-72
months? Will there even be any differentiation between "regular" desktops with
discrete CPUs, memory, etc and the system-on-a-chip variant?

~~~
detaro
Not everybody compiles all binaries they deploy on developer workstations. If
you run ARM servers, it makes sense that your build servers are on ARM as
well, especially since you should run tests on the same architecture you're
deploying to.

~~~
ersoft
Well, when your build takes 1 minute on ARM and 3 seconds on your laptop, it
makes sense.

