Personally, I think the Go community is a little unhealthily obsessed with this particular metric.
I've been on teams that are not obsessed with this metric and what very often happens is that every patch landed makes the system just a tiny bit slower. The next thing you know, your tools feel glacial and users are miserable.
And, in fact, at work right now my team is feverishly trying to claw back some performance we lost over the past few months without really noticing.
If you're working on a system where performance is a key part of the UX, you really do have to be vigilant about it.
Let's not forget that Go was invented to solve Google's problems, one of which was that it took hours to compile their C and C++ code. Go 1.7's compile time, despite the improvements, is still about 2x that of Go 1.4.
What I find positive about Go is that it makes younger generations who only know C and C++ rediscover the compile times we had with native compilation in the '90s.
Really, most compiled languages except C++ are fast to compile. In the other thread I pointed out that Java compiles very fast, but nobody in the Go community likes to talk about that, it seems. Java actually has a multi-tiered compiler: the frontend is very fast because it doesn't optimise, and the JIT has both a fast-but-low-quality compiler and a slower-but-higher-quality one. So you get the developer benefit of instant turnaround, while the hot spots still get optimised to GCC/LLVM quality.
I disagree that "most compiled languages... are fast to compile." Scala is relatively slow to compile (I've seen Spark take over an hour to build). OCaml and SML are none too speedy to compile. Rust's compiler is rather slow (they're going to focus on speeding it up, I have heard). It seems like most compiled languages that people actually use are slow to compile.
Go is a big exception, due to its conscious decision to focus on fast compile times. This choice wasn't free: it involved carefully weighing tradeoffs. For example, Go compiles source files unit by unit, forgoing global optimizations to save time. The grammar of the language is simpler, to enable more efficient parsing. Features like operator overloading are not present. And it is trivial to tell what is a function call and what is not: there is no syntactic sugar like Ruby's paren-less function calls.
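To make the parsing point concrete, a trivial sketch (names are made up): a bare identifier in Go is never an implicit call, so both the parser and the reader can always tell a call from a value.

    package main

    import "fmt"

    func greet() string { return "hi" }

    func main() {
        f := greet        // no parens: a function value, not a call
        fmt.Println(f())  // parens are the only way to make a call
    }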
By "most compiled languages" I guess I was thinking of the most popular ones, not literally all compiled languages. Sloppy phrasing, sorry. The top compiled languages according to TIOBE are Java, C, C++, C# (actually these are the top languages in general). Then they're all scripting languages until position 11 which is Delphi. So those are the top 5 and except for C/C++ they compile fast.
OCaml, SML, Haskell, Rust etc are all rather rare languages.
TIOBE is a poor measure of programming language popularity, for a lot of reasons. But even if we accept that Java, C, C++, and C# are the top languages, I already commented that Java's build is not that fast in the real world (except for very small projects). I don't have any first-hand experience with C#, but I've never heard anyone claim fast compilation as an advantage of that language. C and C++ are slow builders, as we already know. So the top compiled languages are rather slow to compile, which is one reason why scripting languages (I assume you mean non-ahead-of-time-compiled languages) became popular in the 2000s. This is part of what Rob Pike was talking about when he said "poorly designed static type systems drive people to dynamic typing."
Gradle/Maven don't do incremental compilation at the level of the file, so yes, they're gonna redo the whole thing each time. Use an IDE for that. IntelliJ will happily do incremental compilation in Java projects at the level of the individual file (and there's no relinking overhead).
I understand that you're saying the top languages are slow builders, but it's just not the case. It sounds like you've used tools which simply aren't optimised for build times and judging the language based on that. But I usually have roundtrip times between editing my code and running the recompiled program of about a second when working with Java.
Prior to using Java, I used C++ for 10 years. Every project I worked on had compile time problems. It was an open secret that we spent most of our time waiting for compiles.
It's very common for people working on small projects to feel good about their compile times. But sometimes projects grow up, and they outgrow the language and tools they were built with.
And yet, it is completely consistent with pretty much all other measures of programming language popularity (GitHub, job postings, Stack Overflow answers, etc.).
So maybe it's not that poor a measure after all.
The guy who created the first native C++ compiler, and one of the authors of D?
No, it was invented to solve the problem that a handful of Google engineers experienced while writing C++ at Google.
Most of Google's code base is written in Java and, as such, doesn't suffer from slow compilation.
Obviously it's nowhere near the pain of, say, C++, but it's certainly something that was missed and will be nice to get back!
To be hypocritical, there must be a difference between words and actions; it looks to me like "we're really concerned about compile times" has been backed up by actions, even if there was a regression. I don't recall them ever promising that there would never be a compile time regression because it was their #1 concern; what I recall them promising as their #1 concern was almost-but-not-quite total backwards compatibility, which they've maintained.
Not at all - it was one step back while getting on the right path, then speeding up again.
1. Find slow thing.
2. Make faster.
3. See benchmark improve.
Some of the most satisfying coding I've done has been optimization.
It's also taught me just how often we guess wrong the first time. It's a skill you can get better at. "Profile first" is a good heuristic, but "guess first, profile second, _then_ do the work" means you train yourself too.
I'm still very much a noob at this.
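For anyone who wants to try that loop, a minimal sketch using Go's testing package (the function is a made-up, deliberately naive hot spot):

    // join_test.go: a deliberately naive hot spot for step 1.
    package join

    import "testing"

    // join concatenates with +=, reallocating the string on every
    // iteration; a classic thing the profiler will flag.
    func join(parts []string) string {
        s := ""
        for _, p := range parts {
            s += p
        }
        return s
    }

    func BenchmarkJoin(b *testing.B) {
        parts := []string{"alpha", "beta", "gamma", "delta"}
        for i := 0; i < b.N; i++ {
            join(parts)
        }
    }

Run "go test -bench=. -cpuprofile=cpu.out", inspect with "go tool pprof cpu.out", swap in strings.Join, and step 3 (see the benchmark improve) takes care of itself.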
> smaller binaries
In 2000 I worked on a project where "make clean all" would take 1 hour per platform/build type.
By the way, here is a nice video of a Q&A session with Alexandrescu, Matsakis, Stroustrup and Pike (D, Rust, C++ and Go creators), where they also explain what 'systems language' means.
(bookmarked at the start of the talk about system programming)
With the releases subsequent to 1.5, the cleanup and enhancement of the Go-based toolchain got under way, and now it's showing nice results.
I think it's one really great feature of Go that the whole build/runtime infrastructure is written in Go itself. So everyone who is fluent in Go can easily read, and possibly enhance, that infrastructure.
Right, and also fast compilation.
I remember reading the goal was ~20% slower than C for overall performance.
I don't know anyone who has written more than a toy OS in Go, but it does not seem a fundamentally impossible task. Getting started is harder, because the earliest parts of the kernel would be modifications to the runtime package. Once you get that out of the way, writing drivers could be quite fun.
But to the broader topic, I don't have a good definition of "systems language" so I'm not going to claim Go is one.
(I work on Go.)
I have been using it in my binary size measurements as I work on http://golang.org/issue/6853, and it has shrunk reasonably well since the 1.6 release. More to come!
An OS has to deal with often-strange calling conventions (interrupt routines, OS entry-point traps, multiple-language support). Can Go express the call stack in any way?
Drivers are often interested in controlling timing and latency down to the instruction. Can generated Go code be examined by the developer for instruction counts?
A good kernel language is pretty low-level. We used to write them all in assembler.
I have used it in sticky situations, like the trampoline for a signal handler, and writing functions that have to conform to a C ABI because they are called by the OS. (In particular, a loader .init function.)
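For ordinary application code (as opposed to the runtime-level tricks above), the usual way to conform to a C ABI is cgo's //export. A minimal sketch, built with "go build -buildmode=c-shared":

    package main

    import "C"

    //export Add
    func Add(a, b C.int) C.int {
        // Visible to C callers as: int Add(int a, int b);
        return a + b
    }

    func main() {} // required by the build mode; never runs when loaded as a library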
To minimize how much assembly a theoretical OS author has to write, there are a couple of extra tricks in the runtime package they could borrow, like //go:noescape annotations. You have to be careful when writing code like that, but it does the job.
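A minimal sketch of that annotation (names hypothetical, and the body would live in an assembly file, so this won't link on its own):

    package mmio

    // read copies one byte from a device register into *dst. The
    // implementation lives in mmio_amd64.s; //go:noescape promises
    // the compiler that dst does not escape, so callers can keep
    // their buffers on the stack.
    //
    //go:noescape
    func read(dst *byte, port uint16)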
Before the Go runtime was rewritten in Go, I worried that for a task like writing a kernel there would be a lot of assembly. But I have been pleasantly surprised by how little is needed in the runtime, and that gives me some hope for a project like that.
Of course this is speculation on my part, I could be wrong.
As with most things, it's a spectrum, not a binary choice. Userland is also part of an operating system and could be written in Go without much trouble. One could also argue that things like Docker are systems programming.
Go is less of a 'systems programming' language than C, C++, or Rust, because it has a garbage collector. Go is more of a 'systems programming' language than Java, C#, Node.js or Ruby, because it compiles to native binaries, provides value types (C# does too), integrates more tightly with standard UNIX APIs, etc.
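To illustrate the UNIX-API point, a minimal sketch (Linux/BSD only; the fields of syscall.Stat_t are platform-specific):

    package main

    import (
        "fmt"
        "log"
        "syscall"
    )

    func main() {
        var st syscall.Stat_t // plain value type, filled in directly by the kernel
        if err := syscall.Stat("/etc/hosts", &st); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("inode=%d size=%d\n", st.Ino, st.Size)
    }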
This way, no garbage collector nor any bloat gets linked into your program ... but you still benefit from RAII, templates, array slices, and simplified declaration syntax.
You have OSes written in Lisp, Cedar, Modula-2+, Modula-3, Oberon, Oberon-2, Active Oberon, Component Pascal, Sing#, System C#,...
All of them systems programming languages which do have a GC for heap management.
Given that the Go compiler and runtime are written in Go, with the necessary assembly glue code, I would say Go is no less a systems programming language than those referred to above.
Even ANSI C cannot be fully implemented without assembly or compiler-specific extensions for hardware integration.
But of course the naysayers will only change their discourse the day someone decides to invest their time and produce a bare-metal runtime for Go, following the approach of one of the OSes I mentioned.
If the system includes the task of its own maintenance (and therefore the developers who work on it), Go is faster than C, because of its maintenance and correctness advantages. (That definition of 'system' is more like 'systems biology' than 'close-to-the-iron software that calls OS APIs'.)
If Go stays popular, the compiler will keep getting faster. Especially now that it's self-hosting, future improvements will improve both compile-time and runtime performance.
I like Go a lot, and I don't fret about binary size. But let's not be too charitable :). The CLI apps I write usually start trending towards 10MB with a few package imports. Their equivalents in C are usually less than 100k. One can imagine getting to better than a 100x difference.
> I like Go a lot, and I don't fret about binary size. But let's not be too charitable :). The CLI apps I write usually start trending towards 10MB with a few package imports. Their equivalents in C are usually less than 100k. One can imagine getting to better than a 100x difference.
Whoever wants smaller binaries can use this compressor tool: http://upx.sourceforge.net/ It may be a bit slow, but you only need to run it on the distributed version.
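A sketch of that workflow (-s drops the symbol table, -w drops DWARF debug info, and upx compresses whatever is left; the binary name is made up):

    go build -ldflags="-s -w" -o myapp
    upx -9 myapp   # the binary unpacks itself in memory at startup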
"Why move the compiler to Go?
Not for validation; we have more pragmatic motives:
Go is easier to write (correctly) than C.
Go is easier to debug than C (even absent a debugger).
Go is the only language you'd need to know; encourages contributions.
Go has better modularity, tooling, testing, profiling, ...
Go makes parallel execution trivial.
Already seeing benefits, and it's early yet.
Design document: golang.org/s/go13compiler"
"Why translate it, not write it from scratch?
Write a custom translator from C to Go.
Run the translator, iterate until success.
Measure success by bit-identical output.
Clean up the code by hand and by machine.
Turn it from C-in-Go to idiomatic Go (still happening)."
That practical point aside, I do think it's good to lower the barrier to contributing.
Then you can take this base and start improving on it.
For example, using external linking should "just work" with recent versions of sqlite3 but it fails on Windows.
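For context, the failure shows up with perfectly ordinary go-sqlite3 code; the blank import below is what pulls in cgo and the platform-specific linking:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/mattn/go-sqlite3" // cgo-based driver
    )

    func main() {
        db, err := sql.Open("sqlite3", "test.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        var v string
        if err := db.QueryRow("select sqlite_version()").Scan(&v); err != nil {
            log.Fatal(err)
        }
        fmt.Println("sqlite", v)
    }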
In my experience, certain aspects increase risks when using go:
1. go is buggier for Windows than Linux and FreeBSD, even though they are all "first class" platforms in go.
2. cgo is also buggier for Windows than Linux and FreeBSD. cgo is not go, as Rob Pike said, so this is listed separately.
3. 386 was buggier than x64, at least in go 1.2 to 1.3 on Windows. I wish I could ignore 386 and XP but I cannot.
So if a project includes all of the above aspects, well...it wasn't a good outcome for me, so I ended up sticking with C and C++ on Windows for non-hobby software.
However, I've been very happy with go 1.4.3 + cgo + x64 on FreeBSD 10 and will probably update to go 1.6 or 1.7 within a year.
Regarding go on Windows, I'm hoping to see go-sqlite3 issue #272 resolved:
And go issue #14397 cmd/link, runtime: panic "invalid spdelta" with -ldflags="-linkmode internal"
And go issue #10776 cmd/link: windows cgo executables missing some DWARF information
And go issue #12516 runtime: throw in linked c++ library causes app crash even if caught, on windows platform
And so on...
I really love using go (with cgo) on non-Windows platforms, but I can't see this combo being useful on Windows for mission-critical projects for a while. There's so much more to go than the language itself, and switching gears to other languages that don't offer that is painful.
Maybe LXSS.sys will eventually make this irrelevant -- at least for companies that migrate from older versions to Windows 10 or to a non-Windows OS.