Golang on ARM (github.com/golang)
105 points by selvan on Nov 17, 2015 | 63 comments


One of the funniest experiences I had was building an ARM Go binary on an x86-64 Linux box using a cross-compiler (nothing new there). Then, by accident, I tried to run the ARM binary on my x86-64 box instead of on the ARM box. It ran like a normal executable, which confused me until I realized I had installed qemu-arm, which had registered a Linux binfmt_misc handler for ARM binaries, so the kernel transparently ran my binary under ARM emulation. Of course, copied over to the Raspberry Pi, it ran fine there too.
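For anyone who wants to reproduce this, a rough sketch of the workflow (assumes qemu-arm is installed and registered with binfmt_misc; package names like qemu-user and binfmt-support vary by distro, and hello.go is any Go program):

    # cross-compile for ARM on an x86-64 Linux box
    GOOS=linux GOARCH=arm GOARM=6 go build -o hello-arm hello.go

    # confirm it really is an ARM binary
    file hello-arm

    # with the qemu-arm binfmt_misc handler registered, this "just works"
    ./hello-arm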


Did you benchmark performance on QEMU vs bare metal?


This is one of Go's best features for me. You can trivially cross-compile statically linked binaries, copy them over to an ARM system and they run fine.

Contrast that with C/C++: sure, you can cross-compile, and I have in the past, but it's a huge pain to set up, and who ever remembers the command-line options? Those "target triples" are barely documented anywhere. Plus, good luck building a statically linked C++ program.
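For what it's worth, here is a sketch of what the C side can look like once a cross toolchain is installed (the triple is baked into the tool names; arm-linux-gnueabihf is a Debian-style convention, so treat the exact name as an assumption):

    # <arch>-<os>-<abi> triple, hard-float ARM Linux target
    arm-linux-gnueabihf-gcc -static -o hello hello.c

Static linking of plain C like this usually works; it's C++ with its runtime and exceptions where things tend to get painful.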

Lua might be a decent option too.


What about LLVM? Go's support of cross compilation is laughable in comparison to the different LLVM backends. And you can just use the -arch armv7 and -arch arm64 shortcuts instead of those target triples.

Also, FWIW, the reason why Go can even be cross-compiled in the first place is because it has platform-specific C and assembly code from cgo's runtime, so it can support the different architectures/platforms.

Of course, if Go was just an LLVM frontend instead of using gcc then it wouldn't even need the platform specific C code... And it would support a shitton more platforms like asm.js. But I guess Google just hates Apple that much?


Russ Cox wrote a great comment[1] on HN about why using the Plan 9 toolchain rather than LLVM or even GCC was a competitive advantage for them in terms of speed to delivery, control, and iteration.

"Most importantly, it has been something we understand completely and is quite easy to adapt as needed. The standard ABIs and toolchains are for C, and the decisions made there may be completely inappropriate for languages with actual runtimes.

For example, no standard ABIs and toolchains supported segmented stacks; we had to build that, so it was going to be incompatible from day one. If step one had been "learn the GCC or LLVM toolchains well enough to add segmented stacks", I'm not sure we'd have gotten to step two."

"... the custom toolchain is one of the key reasons we've accomplished so much in so little time. I think the author doesn't fully appreciate all the reasons that C tools don't work well for languages that do so much more than C. Most other managed languages are doing JIT compilation and don't go through standard linkers either."

[1] https://news.ycombinator.com/item?id=8817990


> For example, no standard ABIs and toolchains supported segmented stacks; we had to build that, so it was going to be incompatible from day one. If step one had been "learn the GCC or LLVM toolchains well enough to add segmented stacks", I'm not sure we'd have gotten to step two.

LLVM now supports both segmented stacks [1] and precise garbage collection with patchpoints [2].

Tweaking support for segmented stacks is quite easy; I added support for it on x86 Mac. You simply adjust the prologue emission code in X86FrameLowering.cpp, and likewise for the other targets.

[1]: http://llvm.org/docs/SegmentedStacks.html

[2]: http://llvm.org/docs/StackMaps.html http://llvm.org/docs/Statepoints.html


Yeah, the point is that it wasn't there at that time.


> But I guess Google just hates Apple that much?

This is silly. Google contributes a huge amount to LLVM. They're very invested in the project.

(NB: I'm not defending, and don't agree with, the choice to avoid LLVM in Go.)


> the reason why Go can even be cross-compiled in the first place is because it has platform-specific C and assembly code

It does not have platform-specific C code, as there is no C code anymore. It sure does have platform-specific Go code and platform-specific assembly code, though.

> from cgo's runtime, so it can support the different architectures/platforms.

There is platform-specific code in the runtime, and there is platform-specific code elsewhere in the standard library, e.g. in the syscall package, but it's not from "cgo's runtime" (whatever that means). Go does not require cgo to interact with the target system, even on platforms where the interaction is done through shared libraries, not system calls, like Solaris and Windows.
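To make that concrete, platform-specific Go code is selected with file-name suffixes and build constraints, no cgo involved. A minimal sketch using the "// +build" syntax (mypkg and pageSize are hypothetical names, purely for illustration):

    // stat_linux_arm.go: the _linux_arm suffix already restricts this file
    // to GOOS=linux GOARCH=arm; the build tag below is the explicit form.

    // +build linux,arm

    package mypkg

    // Each supported platform supplies its own copy of values like this
    // in its own file, all exposing the same API to the rest of the package.
    const pageSize = 4096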


> because it has platform-specific C and assembly code from cgo's runtime

I'm not sure what you mean by "cgo's runtime", but the Go runtime has been written in Go since 1.5. Some bits are still written in assembly, but that's mainly for performance.

I don't think targeting every arch and OS out there (including virtual archs like the asm.js you mention, though there is the third-party GopherJS) was ever their goal.

In case you're interested, there is llgo, which has recently been merged into LLVM. It's still not practical, however.

> But I guess Google just hates Apple that much?

I don't understand what this is supposed to mean, since LLVM does not belong to Apple, and Google is a heavy user of and contributor to it.


> Some bits are still written in assembly but that's mainly for performance.

Many bits in math and crypto have fast assembly variants, but the assembly code in the runtime and sync packages is not there for performance: assembly is simply the only way to put data in the right registers, issue memory barriers, implement the right atomic operations, and so on.
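The user-facing side of those assembly stubs is packages like sync/atomic: the operations below have to compile down to the right hardware instructions (LDREX/STREX loops on ARM, LOCK-prefixed instructions on x86) no matter what. A minimal sketch:

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func main() {
        var n int64
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                atomic.AddInt64(&n, 1) // backed by per-architecture assembly
            }()
        }
        wg.Wait()
        fmt.Println(atomic.LoadInt64(&n)) // always 100
    }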


It's incredible to run buildall.sh on a recent go repo and see all the cross-compilers get built with no fuss or hassle. There are at least 30 different target platforms I can get to from my ubuntu box with a single command line.

I'm not aware of anything else that makes cross-compilation tooling this easy.


It's called NetBSD.

build.sh cross-compiles one codebase for 57(+)[0] architectures. Even for a "native" build, the tooling goes through the same abstraction that builds for a completely foreign target, so it's continuously well-exercised.

[0] http://www.netbsd.org/ports/
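For the curious, the invocation is roughly of this shape (a sketch from memory; see the NetBSD guide for the real options):

    # build an unprivileged (-U) cross toolchain and a full release
    # for an ARM evaluation board, from more or less any POSIX host
    ./build.sh -U -m evbarm release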


I do nightly binaries for Prometheus on all the possible archs; while everything should compile cleanly, small issues can still prevent cross-compiling.

darwin/arm, darwin/arm64, plan9/386, plan9/amd64 and solaris/amd64 don't compile for any of the 11 binaries (e.g. plan9 doesn't work due to logrus not having support for it).

This leaves me with the list at http://www.robustperception.io/prometheus-nightly-binaries/. That's still 20 architectures I can compile almost everything for automatically with an Ansible+Jenkins setup.


What is the problem with solaris/amd64?


../../Sirupsen/logrus/text_formatter.go:28: undefined: IsTerminal

The logrus library we use for logging doesn't support Solaris.



Good to know, thanks. We vendor our dependencies so it'll be next release before this will work out of the box.


Maybe I'm in the minority, but I would rather have resources devoted to improving compile times in 1.5+ than to having Go work on ARM.

Isn't the best course usually to build a solid foundation, then branch out?


I, too, would like compile and lint times to be faster, ideally a lot faster.

That said, the counterpoint is that cross-compilation is not something you can easily bolt on later -- if you aren't doing it, and testing it, along the way, you will never be able to add it back in. It needs to be there from the beginning and stay a first-class support goal; otherwise mistakes, good intentions, and laziness will all yield a product that can't easily make the leap back to solid, hassle-free cross-compilation.


Many of the contributions for the ports aren't from the Go team themselves, but from external contributors.

Also, work is in progress to improve build times, although I don't know if that's an active focus, rather than just something that's being fixed as time permits. A few improvements have already gone back to the current development gate for a variety of cases.

I think it's important to remember that development can be done in parallel; just because you see work being done on an area that isn't important to you doesn't mean there isn't also work being done on the area that is. Some of these changes take a long time.


Really? I know 1.5 is slower than previous versions, but it's still by far the fastest compiler I've ever worked with.


In the global scheme of things it's still pretty fast, but builds on 1.5 take 2x the time of 1.4; that's pretty bad for existing Go users.


Why is that a problem for you? With golint / gocode I never have any need for extremely fast compilation. What's your setup like that you're rebuilding so often?


Personally, my biggest beef would be that the go developers specifically stood against generics on account of compiler complexity => slow compiling.

It seems they accepted this sacrifice for some other unimportant feature*

*edit: Looks as though the feature they sacrificed performance for was writing Go in Go.


Compilation in Go remains lightning fast, again, faster than any other compiler I use; fast enough that it's sane to run the compiler every time I save a file so my editor can highlight the errors.


I think there are two schools of thought:

1) Perf for a product should be measured against some shipping benchmark, and intermediate regressions don't matter as long as we come in below the benchmark at ship time.

2) Perf improvements should be "locked in" and not regressed, unless strenuously justified.

I think the parent is in group (2), where it doesn't matter that Go still beats the "other compilers" benchmark, only that it regressed from its earlier perf metrics for what they believe to be inadequate justification.

I think both groups have valid points. I think (2) is especially good at preventing "perf creep", wherein you continually justify small perf regressions for new features, explaining them away because competitors are doing the same. Eventually, you get to the point where everyone's just slow, but nobody has any impetus to improve because they're still better than the competition.


Group (3), of which I am a member:

If the developers take a specific, principled stance against $really_useful_feature on the grounds that it would introduce complexity and thus degrade performance, then compromise that same principle in order to accomplish $entirely_unnecessary_port, it puts the lie to their avowed principles.

I don't mind that Go compiles more slowly. To Thomas's point, it's still really fast. What I mind is that a good portion of the Go community has accepted the lack of generics on the basis of principles that were then haphazardly violated in order to scratch the useless-but-fun intellectual itch of bootstrapping the compiler.

Now that they've done this, they have revealed themselves to be entirely dishonest about their complexity/performance principles, and now the one legitimate objection that has kept generics out of the language has nothing to stand on.

Again, I don't care about performance per se, I care about dishonesty and hypocrisy in the platform's objectives.


"Entirely dishonest". "Dishonesty and hypocrisy". In a discussion about compiler performance. Do you understand that framing things like this makes it seem like discussing things with you is intractable?

Absolutely nobody in the universe promised you generics in Go. You may never, ever get them; the longer the language thrives without them, the less urgent they seem.


The way the post you're responding to was phrased is indeed hyperbolic. But I think there is a valid point that compiler performance concerns are a pretty unconvincing reason to not have generics, especially in light of a demonstrated high tolerance for compiler performance regressions.


Why not consult the Go FAQ for their reasoning on omitting generics?

> Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it.

The word performance doesn't appear in that paragraph.


[deleted]


[deleted]


You're right, that was really stupid of me. I'll delete it.


Complexity in the type system can easily have a negative effect on compiler performance.


Is it possible that writing the compiler in Go makes it easier to add efficient implementations of these missing features?


> It seems they accepted this sacrifice for some other unimportant feature [writing Go in Go]

Writing Go in Go is a huge feature, not an unimportant feature.

Writing the runtime in Go means we write the runtime in a much safer language. We found and fixed lots of bugs just by rewriting the runtime. Everybody, every user benefits from this tremendously.

Writing the runtime in Go means you can have a precise garbage collector. Every user of Go benefits from this tremendously.

Writing the runtime in Go means the runtime implementation becomes in many respects simpler, because there are no longer complex C-Go interactions (the Plan 9-derived C compiler used a slightly different calling convention!). This again reduces bugs and makes debugging easier. Everybody wins.

Writing the runtime in Go means there's one less custom compiler to maintain (no more Plan 9-derived C compiler!). Less maintenance means more time available for more important things. Also this simplifies new ports. Again, everybody wins.

Writing the compiler in Go means you can have a compiler that's not written in a language that sucks for writing compilers. Because writing compilers is now easier and consumes less time, there's more time available for more important things. Everybody wins.

Writing the compiler in Go means you benefit from all the tooling built for Go. This includes source code transformation tools. Automatic refactoring is very easy to do in Go because you have Go parsers in the standard library. This is not merely hypothetical, it's something that's actually used. The end result is that changing the compiler is faster, again allowing more work to be done.
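To give a taste of what "Go parsers in the standard library" means, here is a minimal sketch that parses a source file and prints its top-level function names; real refactoring tools build on exactly these packages:

    package main

    import (
        "fmt"
        "go/ast"
        "go/parser"
        "go/token"
    )

    func main() {
        fset := token.NewFileSet()
        // parse main.go from disk (nil source means "read the file")
        f, err := parser.ParseFile(fset, "main.go", nil, 0)
        if err != nil {
            panic(err)
        }
        for _, decl := range f.Decls {
            if fn, ok := decl.(*ast.FuncDecl); ok {
                fmt.Println(fn.Name.Name)
            }
        }
    }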

Writing the compiler in Go also means the compiler can eventually become an idiomatic Go program that can be understood by Go programmers instead of being a ken-idiomatic C program that can only be understood by a few people. More people can work on the compiler. More work is being done as we speak, including work that will make generated code much faster. Everybody wins.

Writing the compiler in Go means you don't depend on millions of lines of C++ toolchain in order to exist. As a Go compiler writer this has simplified my life a lot. I can compile the Go distribution in under a minute. On my phone. Which is 5 years old. Last time I tried to compile LLVM it died with a compile error after 6 (six!) hours.

Writing the compiler in Go means I can have fun writing compilers, so you benefit because I keep doing it. Everybody wins.

I've written three compiler targets by now, one in C and two in Go. I don't plan to stop, because I like what I am doing, but I'm not sure I would continue if it were still written in C. People benefit from this not only by having more toys to play with, but also because new ports mean more comprehensive testing and new bugs found. Different platforms exercise bugs differently, including bugs that are present on all platforms.

Also, the actual work done on the ports stimulates further refactoring efforts that will make Go even easier to port in the future. Again, everybody wins! And it's all because of this "unimportant feature" that is not even visible to most users of Go, coupled with the powerful refactoring technology that makes it possible.


Golang has managed to replicate a lot of the virtuous things that happened in the early days of Smalltalk, but in a compiled language. (It has a completely different philosophy about debugging, however.)


Compiling is still sub-2000ms, for me at least. Ridiculously fast compared to other languages. I can see why it would be annoying if compiling was an essential part of your workflow... But I think that would be a mistake.


go test fully recompiles most of the (large) project I'm working on.


go test -i


"-i Install packages that are dependencies of the test. Do not run the test."

It does make "go test" complete faster, though it throws the baby out with the bathwater: it doesn't actually run the tests.
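The intended workflow, as I understand it, is a one-off -i run so the dependencies get installed, after which plain go test only recompiles your own packages:

    go test -i ./...   # build and install test dependencies; runs nothing
    go test ./...      # subsequent runs have much less to recompile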


ARM is really important. It's probably the fastest-growing architecture, other than x86, where people run things like this.

I'd also like to see Go on MIPS64, though probably less important than ARM.


MIPS64, MIPS, and SPARC64 are all in the works (some MIPS bits are already merged in).


> Isn't the best course usually to build a solid foundation, then branch out?

Yes, and I think that is what they are doing. First build something stable. Then target a lot of platforms. Then optimize the language itself and also the build times.


Cool! What does the Go memory model look like? Is it stricter than ARM's, or could programs that happen to work on stronger architectures break when run there?


https://golang.org/ref/mem

Btw, in case it's not clear, Go has worked on ARM for the last 6 years; this is not a new thing.

It is the job of the compiler and runtime writers to uphold this model on whatever the target platform is. It doesn't matter whether the target has a strong or weak memory model: correct Go programs written to the Go memory model should work correctly on any target.

Of course, it's possible to write Go programs that violate the memory model but happen to work on amd64 and not on some other platforms. But those are incorrect Go programs (and the race detector has a very good chance of catching them). I will note that it's significantly harder to write such programs accidentally in Go than it is in C.
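The canonical shape of such an incorrect program (it often appears to work on amd64's comparatively strong memory model, but may hang or print an empty string on ARM; the fix is a channel or a sync primitive):

    package main

    import "fmt"

    var msg string
    var done bool

    func setup() {
        msg = "hello"
        done = true // unsynchronized write: a data race
    }

    func main() {
        go setup()
        for !done { // may spin forever, or observe done before msg
        }
        fmt.Println(msg)
    }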


What about support via LLVM? Shouldn't that be enough?


Go already has two compiler implementations: gc (the Go compiler) and gccgo[0].

[0] https://golang.org/doc/install/gccgo


But LLVM already has a lot of good backends (emscripten, ARM, PPC, x86, amd64). Why reinvent the wheel when you could just make Go an LLVM frontend and be done with it?


Compile times? Flexibility over the backend?

That would be my hunch. I think it makes a lot of sense. Also, note that this project exists: http://llvm.org/viewvc/llvm-project/llgo/


> Compile times?

There can be a compile-time hit, but you can use FastISel, which is not bad. The hit is mitigated by the fact that you get a far, far more mature set of optimization passes. And since the Go compiler is itself written in Go, more optimizations would make the Go compiler itself faster, further reducing the impact.

> Flexibility over the backend?

Targeting LLVM in no way locks you into it as your only backend. You simply need a language-level IR that you can retarget to different backends as you like. This is what Swift, for example, does with SIL.


> Targeting LLVM in no way locks you into it as your only backend. You simply need a language-level IR that you can retarget to different backends as you like.

It still ignores toolchain flexibility, especially given the specific needs of Go's managed runtime.


[deleted]


> which LLVM now supports

Key word being "now".

I think you understood rather well.

Also have a look at the supported architectures paragraph in the documents you linked.


And Go 1.4+ doesn't just need segmented stacks but also movable stacks. I don't know whether LLVM has that yet.


It's mostly because the Go compiler was developed out of the Plan 9 compiler code base, which is unrelated to LLVM.

The other aspect is that the Go compiler is intended to be very fast. That's not an explicit design goal of LLVM; using LLVM would slow the compiler down, although it would have other advantages.


I wrote the arm64 and sparc64 Go backends because it was easy. If I were to have retargeted LLVM, I wouldn't even have finished the first one, and I would have hated my life.

In the meantime, Go still supports more hardware ISAs than LLVM. Apparently reinventing the wheel is much cheaper than adding wheels to the old car.


This is for both development on ARM systems (standard go commands, e.g. go build) and deploying to ARM systems (setting ARM as a compile target). That said, I'm genuinely curious why anyone would develop on an ARM-based system as opposed to a run-of-the-mill desktop.
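As a sketch of the compile-target side, the ARM port distinguishes floating-point strategies via GOARM (values as of Go 1.5; the Raspberry Pi mappings are my assumption of typical usage):

    GOOS=linux GOARCH=arm GOARM=5 go build   # ARMv5, software floating point
    GOOS=linux GOARCH=arm GOARM=6 go build   # ARMv6 + VFP, e.g. Raspberry Pi 1
    GOOS=linux GOARCH=arm GOARM=7 go build   # ARMv7, e.g. Raspberry Pi 2
    GOOS=linux GOARCH=arm64 go build         # 64-bit ARM; no GOARM needed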

I realize that standard towers may be supplanted by their SoC brethren, but so what? Will we all be using iMac-like computers as desktops in the next 36-72 months? Will there even be any differentiation between "regular" desktops with discrete CPUs, memory, etc and the system-on-a-chip variant?


Not everybody compiles all binaries they deploy on developer workstations. If you run ARM servers, it makes sense that your build servers are on ARM as well, especially since you should run tests on the same architecture you're deploying to.


Well, when your build takes 1min on ARM and 3 seconds on your laptop, it makes sense.


I run a Go program on a Raspberry Pi device, and it's actually very convenient to be able to quickly edit a Go file, and test with `go run` directly on the machine.


I also run Go on an RPi, and compiled either there or under a qemu layer, because support for cgo cross-compilation to ARM was lacking. That was back in 1.3; it may be better now.


cgo support seems fine, at least for my use. I've compiled and used the go-sqlite3 package without any problem.
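For reference, cgo cross-compilation just needs to be pointed at an external C cross compiler, something along these lines as of Go 1.5 (the toolchain name is distro-dependent, so treat it as an assumption):

    CC=arm-linux-gnueabihf-gcc CGO_ENABLED=1 GOOS=linux GOARCH=arm GOARM=6 \
        go build -o app-arm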


Hey, looks like someone answered you today:

http://techcrunch.com/2015/11/17/google-and-asus-launch-the-...


It's nice to be able to run go and vim on my phone when I want to.




