Go at Google: Language Design in the Service of Software Engineering (golang.org)
168 points by enneff on Dec 4, 2012 | 119 comments

I think the "return an error code" approach is just as unreadable as nested or multiple try/catch blocks when the number of failure states grows large. The "Maybe monad" approach of functional languages and Scala, combined with pattern matching in the language, tends to look cleaner, e.g.

    some_expression match {
      case Some(x)  => do_something_with(x)
      case None     => // no result (e.g. null in C/Java/etc.)
      case Error(x) => handle_error(x)
    }

One of the nice things about the monad approach is they can be combined together. Consider the following chain of function calls which may return null at any point:

int val = foo().bar().baz().blah();

To deal with this with if/else statements, you need three nested if-statements for a single line of code. In a language like Scala, you can just write:

for (a <- foo(); b <- a.bar(); c <- b.baz()) c.blah();

I think there is an inherent tension between explicitly spelling everything out (boilerplate) and readability and writability. I haven't seen enough Go code, or written much of any, to say anything definitive, except that it worries me. I do think the multiple-return style can limit accidental ignoring of error conditions, but the Go language itself doesn't provide any high-level syntactic constructs to make dealing with the errors less painful, whereas with the monad approach (which essentially is multiple return), the pattern matching tends to look cleaner IMHO.
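For comparison, the Go version of such a chain ends up as an explicit check after each step. A minimal sketch, with hypothetical foo/bar/baz functions standing in for the chain above:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical steps, each of which can fail, mirroring foo().bar().baz().
func foo() (int, error) { return 1, nil }
func bar(x int) (int, error) {
	if x < 0 {
		return 0, errors.New("bar: negative input")
	}
	return x + 1, nil
}
func baz(x int) (int, error) { return x * 2, nil }

// chain threads the value through each step, returning early on failure.
// This is the idiomatic Go counterpart of the monadic for-comprehension.
func chain() (int, error) {
	a, err := foo()
	if err != nil {
		return 0, err
	}
	b, err := bar(a)
	if err != nil {
		return 0, err
	}
	return baz(b)
}

func main() {
	v, err := chain()
	fmt.Println(v, err) // 4 <nil>
}
```

The `if err != nil` blocks are exactly the boilerplate being debated here; Go's position is that the explicitness is worth the repetition.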

> "Maybe Monad" approach of functional languages and Scala, combined with pattern matching in the language tends to look cleaner, e.g.

It looks the same to me.

    case maybeValue of
      Just value -> ...
      Nothing    -> ...
isn't any different from:

    if (val != null) {
        ...
    } else {
        ...
    }
> Consider the following chain of function calls which may return null at any point:

    int val = foo().bar().baz().blah();
Like Ruby's:

    val = foo.bar.blah.boo rescue nil

or Java's:

    try {
        val = foo.bar().blah().boo();
    } catch (NullPointerException ex) {
        val = null;
    }

or Python's:

    try:
        val = foo.bar().blah().boo()
    except AttributeError:
        val = None

Personally, I do see Maybe/Option as cleaner, but it isn't as novel as people make it out to be.

> It looks the same to me.

The key point is that because Maybe is a monad, you can chain things together and handle failure at the end much like a try/catch:

    let result = do
          x <- f t
          y <- g x
          z <- h y
          return z
    case result of ...
You are correct that this isn't particularly novel.

And since Go lacks both exceptions and a Maybe/Option type, chaining and computation expressions will look clumsier. I am still a Go beginner and am wondering whether idiomatic Go designs programs in a different way to make up for this.

The comparison is against Go, not against languages with exceptions. There are other languages with a safe-navigation operator (?.) that can safely dereference possibly-null values as well. Go has eschewed exceptions in favor of checked multiple returns, and I think this is more boilerplate-laden and less readable than the alternatives.

Using exceptions to handle null dereferencing has other issues. NullPointerException is not a checked exception in most languages, and therefore you do not see people surrounding chained method calls with potentially nullable intermediate results in try/catch blocks; it's exceedingly rare.

Some languages have nullable types which can be checked by the compiler, and indeed, even Java has adopted @Nullable/@NotNull, but the late adoption of this in Java, and non-existence of it in Javascript/Perl/etc all mean that for the most part, chained method calls, which a lot of people have adopted for fluent APIs/'DSL's tend to go unchecked, and any exceptions simply bubble to the top of the program.

In this regard, Rob Pike is right: a null in the middle of an a().b().c() chain likely represents something the programmer should have handled, not as an exceptional condition, but as a recoverable one (e.g. findCustomer() didn't find the customer).

In many cases, I think returning null is the wrong design anyway (I see lots of Java code where a search() returns null instead of EMPTY_LIST when it finds nothing), but null or false as a catch-all error code just seems entrenched.

That's why I like the Maybe Monad approach, because Maybe(boolean), Maybe(number), Maybe(Customer) are different types, compared to using integers and booleans as arbitrary error codes.
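In Go terms, the closest analogue is arguably the comma-ok idiom: a typed result plus a boolean, rather than a null or an arbitrary error code. A hypothetical sketch:

```go
package main

import "fmt"

type Customer struct{ Name string }

// findCustomer uses the comma-ok idiom: a typed result plus a found flag,
// instead of returning null or overloading an error code.
// (The type and the lookup rule here are illustrative, not from the thread.)
func findCustomer(id int) (Customer, bool) {
	if id == 42 {
		return Customer{Name: "Ada"}, true
	}
	return Customer{}, false
}

func main() {
	if c, ok := findCustomer(42); ok {
		fmt.Println("found:", c.Name)
	}
	if _, ok := findCustomer(7); !ok {
		fmt.Println("not found")
	}
}
```

Unlike Maybe, nothing forces the caller to check `ok`, but the convention is pervasive enough that ignoring it stands out in review.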

> NullPointerExceptions are not checked exceptions in most languages, and therefore, you do not see people surrounding chained method calls with potentially nullable intermediate results with try/catch blocks, it's exceedingly rare.

I wouldn't want a checked NullPointerException. It would be so common that people would end up putting "throws NullPointerException" at the top, defeating the whole purpose of having it. In many cases, exceptions enforce a cleaner flow.

    Connection con = DriverManager.getConnection(...)
If I am trying to obtain a connection, the interface is "returns a connection" or "throws an exception".

> In many cases, returning null I think is the wrong design anyway (I see lots of Java code where a search() returns null instead of EMPTY_LIST if it finds nothing),

In Ruby:

    seq.select {|x| x.some_pred? }.map {|x| x.some_attr }.sort {|a, b| a.some_attr <=> b.some_attr }

works even when seq is [], because Array implements the high-level Enumerable interface.

A Java Person.findAll() returning an empty array won't be very useful as you won't be able to chain.
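For what it's worth, in Go the usual advice is to return an empty slice, which keeps downstream iteration working with no special case. A small hypothetical sketch:

```go
package main

import (
	"fmt"
	"strings"
)

type Person struct{ Name string }

// findAll returns an empty slice rather than using nil/null to mean
// "not found", so callers can range and chain without a nil check.
// (Person and findAll are illustrative names, not from the thread.)
func findAll() []Person { return []Person{} }

// names maps the result onto a slice of strings; with an empty input it
// simply produces an empty output, so the pipeline never branches.
func names(ps []Person) []string {
	out := make([]string, 0, len(ps))
	for _, p := range ps {
		out = append(out, p.Name)
	}
	return out
}

func main() {
	fmt.Printf("%q\n", strings.Join(names(findAll()), ", "))
}
```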

The first two examples you gave are very different. The compiler ensures that you don't forget to check the null case when using the Maybe monad, while you can forget to handle the null value and get a NullPointerException later.

Also, the semantics of catching a NullPointerException in a chain of method calls are different from chaining Maybe results together. If any of the method executions inside raise an unhandled NPE, you'll catch it outside, which is probably not what you wanted to do.

Agreed on both points. I was mostly pointing out that Maybe functionality will need some work from developer side, but can be easily done if you want it that way.

> The compiler ensures that you don't forget to check the null case when using the Maybe monad, while you can forget to handle the null value and get a NullPointerException later.

Though I mostly use dynamic languages with "everything is an object" (or a close approximation), where everything is effectively a Maybe, I do see the value in enforcing checks before using a Maybe. In practice, I try to minimize nullable types, as it is too easy to get lazy (or not know the implementation) and forget the check. Sometimes I use exceptions for flow control rather than returning null.

> If any of the method executions inside raise an unhandled NPE, you'll catch it outside, which is probably not what you wanted to do.

Yes. That will be bad. In an ideal world, given foo().bar().baz(), there shouldn't be a NullPointerException inside foo/bar/baz given how I am using them. If there can be one, I shouldn't be chaining them that way. But it's hardly an ideal world.

I continue to be confused by how Go gets positioned. From the article:

"Go is a programming language designed by Google to help solve Google's problems, and Google has big problems. The hardware is big and the software is big. There are many millions of lines of software, with servers mostly in C++ and lots of Java and Python....And of course, all this software runs on zillions of machines, which are treated as a modest number of independent, networked compute clusters"

The author moved effortlessly from "Google's problems" to "servers" that are "networked compute clusters." That's quite a leap, because it's certainly not true that "all this software" runs in that manner. Android does not, Chrome does not, Chrome OS does not, the iOS YouTube app does not, Google Earth does not, etc. And there's plenty of not-big hardware that Google writes for, like laptops and phones.

And the reason I bring this up is because Go was explicitly pitched as a systems programming language - see the PDF http://www.stanford.edu/class/ee380/Abstracts/100428-pike-st.... Programming tools, IDEs, web browsers, operating systems ("maybe") are among the tasks specifically called out as suitable for Go. But now this article tells us that Go was designed for "large server software".

So the question is, what changed? Was Go found to be unsuitable for tasks like web browsers? If so, in what way?

Nothing changed. The complete list from the linked PDF:

  - web servers
  - web browsers
  - web crawlers
  - search indexers
  - databases
  - compilers
  - programming tools (debuggers, analyzers, ...)
  - IDEs
  - operating systems (maybe)
Of that list, "web servers" and "search indexers" are both the kinds of "large server software" mentioned by Rob in this paper. (Most Google programs are effectively web servers of some kind, even if they don't speak HTTP directly.)

Sorry if our "positioning" of Go is a little confusing at times. We're programmers, not marketing people. We didn't set out to "market" Go at all, and had we set out with a clear message from the beginning we might have done a better job at it.

This paper best describes the initial motivation for Go's existence. However, like any general purpose language, its uses extend well beyond its design goals.

Thanks for your reply!

Go was originally presented as a language for servers, system software, and application programming. The last two were what got me excited, because server-side languages are already well represented, but there aren't many languages you could consider writing a web browser or an IDE in.

However, from my perspective, Google refocused Go exclusively on server software. For example, at Google I/O 2011, the talk was "Writing Web Apps in Go." and in 2012, Go was organized under the "Cloud platform," and Rob Pike's talk was about "high performance network services." Support for Go on App Engine appears to be a higher priority than fixing its known problems on 32 bit. Rob has also discussed how he was surprised that Go attracted more interest from Python and Ruby programmers, who mainly work on server software, than from C++ users (and had some not-very-kind words to say about C++ users). These are some of the reasons I'm discouraged about Go's future as a systems and application language.

In the end I guess you give users what they want, and if Go is mainly interesting to people writing servers, so be it. Keep on rockin' (but I reserve the right to be disappointed).

You're reading tea leaves.

The Go team is not a homogeneous group of programmers who can do anything. Each of us has our own particular areas of expertise. That's one of the reasons we work so well together, but it also means that prioritizing tasks is complicated. Without going into it too much, the idea that we prioritized App Engine over "the 32-bit issue" is nonsense.

Incidentally, that issue, to the extent that it was experienced by some users, will be resolved entirely in Go 1.1. This is timely as a lot of interest has developed in the ARM port recently.

Rob Pike's talk at I/O 2012 wasn't about "high performance network services," it was about concurrency. Network servers are a familiar context in which to understand concurrency problems.

There's no "refocusing" going on. It's just that we're putting our best foot forward, and right now that's in server programs and tools. Writing applications (and I'm assuming you mean non-web applications) requires good UI toolkit support. That's not something we're working on, but it is something that others in the community are doing. We all believe Go will make a wonderful language for writing native apps, it's just not there yet. (And I suspect Rob does in particular, having built Inferno with Limbo, which used Go-like concurrency to great effect in its UI APIs.)

It will come.

The reality is that people working on non-google projects keep leaving because the community is being shaped by some toxic personalities. We get hooked by the evangelism, buy the party line, build a proof of concept, become disgusted with the community, leave, and concurrently realize that C++ is the way forward.

Wow, you are the first such example I've heard of. Please email me adg@golang.org. I would be very keen to hear about your experience with the community.

No thanks, Andrew. I've already contributed enough and I really don't think I owe your company anything at all.

I find it rather rude to publicly say bad things about a community and then not back it up with at least a subjective, detailed report of what happened.

And I find it rude to make sudden demands of strangers. Maybe we can agree to disagree.

Not a demand, just a request. Your "no thanks," is ok with me.

You wouldn't be helping the company (perhaps indirectly), but rather the community. That's my primary concern.

Languages seem to find a "killer app" much like most technologies and then tend to be used primarily for that purpose. Could Go do all of those things? Sure, with enough time, resources, and libraries, it could. But Google has a problem now (in scale) that Go excels at and can provide value now, so they use it for that.

That said, I would guess that most of Google's code is still C++, Python, and Java.

There are two senses of "scale": large software, or software running on large data sets. The article uses the first notion: "at scale, for large programs with large numbers of dependencies, with large teams of programmers working on them."

So Go was designed for large software systems, and maybe it's really good at those things. Or maybe not, because nobody's written one yet. The article says so: "We don't have enough experience yet, especially with big programs." So Go is unproven in this way.

Speculatively, my guess is that it won't work "at scale" in its current state. For example, Go errors out on unused variables and imports - indeed, on everything that would merely be a warning elsewhere. Imagine trying to make a piece of software like WebKit or LLVM compile on all of the platforms it supports, with all of the compilers they support, without introducing a single warning (or extraneous header or unused variable) on any of them. It would be a maintenance nightmare. You need some flexibility - demanding perfect hygiene for every cell in an NxM matrix of supported platforms is infeasible.

In the other sense of scale - large data sets - Go apparently works well enough to be involved in YouTube. That's cool and is a credit to it.

You're saying that if Go had the same design flaws as C/C++, then it wouldn't be possible to write LLVM in it.

But Go doesn't have most of the design flaws you mentioned. For example, there are no multiple Go compilers wildly diverging in what they accept; that problem is unique to C++. In Go, as in Java or C#, a compiler either implements the whole spec or it cannot be called a compiler.

Go also doesn't have the portability problems that have accumulated in C/C++ over decades.

Go doesn't have macros (which enable the mess that C/C++ code finds itself in).

Go doesn't have manual memory management or pointer arithmetic, array access is always bounds-checked, etc. - it's a radically safer language than C/C++.

I also fail to see the logical connection between unused variables or unused imports and any of that.

Or how fixing the error caused by an unused variable is a "maintenance nightmare." Count the number of #ifdef statements in Boost and then let's have a discussion about what is and isn't a maintenance nightmare.

> For example, Go errors out with unused variables, imports, or indeed for all warnings.

Go doesn't have warnings. The Go compiler errors out on compile errors. Unused variables and imports are two things that people typically assume should be warnings, and are surprised to find them as errors.

So I don't think your fears are well-founded here. Go is a much stricter and clearly defined language than C. There are far fewer edge cases and ambiguities. The spec is much easier to implement, and indeed there are two complete compiler implementations that agree with each other and the spec.

That WebKit or any other large C project could produce no warnings is, of course, hard to imagine. C compilers emit warnings like... well, I can't think of a clever simile. But the point is, it's totally normal for C programs to emit tons of warnings. But ANY Go program (valid or not!) can never produce any warnings. Ever.
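For readers wondering how you quiet those errors during development: Go's sanctioned escape hatch is the blank identifier. A minimal sketch (the names and the choice of imported package are illustrative):

```go
package main

import (
	"fmt"
	_ "strings" // blank import: keeps an import without using any of its names
)

// silence shows the blank-identifier idiom for the "declared but not used"
// error: assigning to _ marks the variable as deliberately unused.
func silence() string {
	x := 42
	_ = x // without this line, the compiler rejects the program
	return "compiles cleanly"
}

func main() {
	fmt.Println(silence())
}
```

So the strictness is real, but it costs one explicit line per deliberate exception rather than a compiler flag.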

Slide 14 is particularly important for new Go programmers. Go's garbage collector does not have access to certain optimizations because of the requirement to support pointers to internal members of structures and arrays.

The loss of these optimizations is compensated for by nesting structures, instead of always using pointers to imitate the pass-by-reference semantics found in other GC'd languages such as Java or Python. Used correctly, nested structures can reduce the number of objects to be collected and analyzed.
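A small sketch of what that means in practice - the types here are hypothetical:

```go
package main

import "fmt"

type Point struct{ X, Y int }

// Circle nests Point by value, not as *Point: the Point is laid out inline,
// so the collector sees one object instead of two.
type Circle struct {
	Center Point
	Radius int
}

func main() {
	c := Circle{Center: Point{X: 1, Y: 2}, Radius: 3}
	p := &c.Center // an interior pointer into the struct, which Go permits
	p.X = 10
	fmt.Println(c.Center.X, c.Radius) // 10 3
}
```

The interior pointer `&c.Center` is exactly the feature the slide says the collector must support, and value nesting is how you keep allocation counts down despite it.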

From the article:

"In 1984, a compilation of ps.c, the source to the Unix ps command, was observed to #include <sys/stat.h> 37 times by the time all the preprocessing had been done. Even though the contents are discarded 36 times while doing so, most C implementations would open the file, read it, and scan it all 37 times. Without great cleverness, in fact, that behavior is required by the potentially complex macro semantics of the C preprocessor."

"Great cleverness" apparently means "the ability to detect that a file has a standard 'include once' pattern and only process it once". It hardly seems like rocket science.

No, that's called "dirty nonstandard corner cutting that will cause you to curse up a storm at $VENDOR who wrote their compiler to be fast but it breaks on $POPULAR_LIB which gets cute with the preprocessor, and yes they shouldn't do that but are you going to rewrite it?"

That's incorrect. Optimizing the guard macros works in all cases as long as you use the standard pattern and the C Preprocessor memorizes the name of the guard macro.

How does this happen then?

"The construction of a single C++ binary at Google can open and read hundreds of individual header files tens of thousands of times."

An excellent question which deserved an answer in the paper.

Walter Bright (who created D to solve some of the same problems Go aims to solve) wrote something a few years back which touches on this topic:


Off hand, I can imagine problems with: multiple guarded blocks at the top level, things outside guards, things that look like guards but aren't (#IFDEF conditional sections, perhaps), things that don't look like guards that are, things that affect which guards are switched on in a dynamic way (detect a feature and enable a lib)...

Some of those could be non issues. There could be other issues I lack the experience to guess. It's no use saying "so don't do that" - not all code is under your control. Honestly if it were simple, it would be solved.

As I mentioned, the cpp needs to recognize the standard pattern and act on it. If you have multiple guards at the top level, you are trying to do something clever which cannot be optimized -- and also cannot be expressed in Go.

You mean it cannot be expressed using Go's (non-existent) preprocessor. I am relatively confident that you'd be able to express it in Go (perhaps sacrificing speed, in the process).

I can see a reason why the potentially complex macro semantics of the C preprocessor would require this behavior, and why you would certainly need to re-evaluate certain macros you had already processed whenever a file was included again. Doing that without impacting any of the preprocessor's functionality? Great cleverness - for really no reason other than faster compiles, which isn't that big a deal to most people.

There's plenty of magic that can be done in the C preprocessor that makes it sometimes possible to #include a file twice with different effects than #include-ing it once.

#pragma once and Objective-C's #import directive both do that, though.

1984 was a long time ago, I think most/all compilers ended up adding this optimization at some point.

It is somewhat complex to do it correctly, see: http://gcc.gnu.org/onlinedocs/cppinternals/Guard-Macros.html

Someone can still throw in an #undef __YOUR_HEADER_H__ somewhere, and while that may be playing with fire, the spec says the header should be included again. There are so many corner cases (and ifdefs are used for much more than guards) that you really do need to re-evaluate it every time.

It really doesn't seem that complicated. When reading an include file the first time, check if everything is enclosed in a preprocessor conditional, one without else-branch. If so, remember the condition and re-evaluate it before opening the file again.

And if you're looking for rocket science you clearly missed a section.

Here is a recent Mozilla talk about the Firefox build system. The speaker, Gregory Szorc, provides LOTS of data about the performance of parallel and recursive makefiles and Mozilla's investigation of new build tools.


I enjoyed this article. Pike is an advocate for the language, but he openly discusses the trade-offs that were addressed by the language. Too often, the downsides to a language/technology are ignored, leading to technically boring talks.

"For the record, around 1984 Tom Cargill observed that the use of the C preprocessor for dependency management would be a long-term liability for C++ and should be addressed."

And in 2012 Apple tries to address it: http://news.ycombinator.com/item?id=4832568

Go is 3 years old. Also, golang addresses more than just this issue, the concurrency approach is interesting for example.

D is 11 years old and solves the exact same problems plus it uses a more traditional approach to objects.

The big difference is Go has Google+Pike pushing it and might actually get widely adopted. Getting some improvement is not a bad thing.

> The big difference is Go has Google+Pike pushing it and might actually get widely adopted.

I don't think it is just that. There is also a difference in culture and tradition. Roughly, Go fits more in the C and Python tradition, D and Rust fit more in the C++ and Haskell tradition. Simplicity versus formal correctness.

I think the Python/C pond is just larger than the C++/Haskell pond.

Considering one of the goals of the language is simplicity, I don't see how D has solved this problem.

Because everyone else doesn't want to throw away the last 20 years of language design.

Go's search for simplicity is seen as a solution looking for a problem in the language design communities.

> Go's search for simplicity is seen as a solution looking for a problem in the language design communities.

But most programmers are not in language design communities. There is a big potential for a 'safer C' or a 'faster Python'. Yes, that's a simplification, but also how people will see Go.

> 'faster Python'

Why throw away Python's capabilities to execute it faster? Just make use of PyPy.

PyPy is geometrically about halfway between CPython and Go, performance-wise.

For the type of applications Google is showing off with Go, I doubt it matters.

I do not mean this as an attack but have you written much actual "real" Go code?

There are several reasons I'm interested in D and more so Rust, but they're very different than Go.

I do not mean this as an attack but have you written much actual "real" D code?

See how this never ends?

Not really, given that I think you're missing out on the advantages of Go, looking for the wrong things, and generally missing the point of the article entirely.

And yes, I have. Hence my comment.

Er, you weren't the GP, sorry, but my point remains, though somewhat redirected at cjensen.

I found this article really interesting, though I would like to see more information on the design decisions behind the syntax (i.e. types/return types coming after the name). They did say it was easier to parse from both a human and a computer perspective; however, personally (being used to C-based languages, I guess), I find it a little more difficult to parse. I'd just like to see some references for this rather than insinuation. Anyone have any?

Excellent - thank you for that.

I'm not sure I completely agree with their reasoning: the example only really got complicated when dealing with function pointers, however that could be simplified with a macro (or a delegate in other languages).

I do agree that C can get complex to read, however I'm not sure switching the type/parameter option is the answer. But then again, I've probably become accustomed to this format. The real test will be using the language for a few weeks... I better start some small hobby projects in Go just to see if I end up liking it!

Go doesn't have macros, as they're a whole other can of worms, so the simplification one might use in C doesn't apply to Go. :-)

Go has great support for closures, so you tend to use function types more often (or perhaps just more naturally) than in C.
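A quick illustration of how a named function type and closures read in Go (the names are made up):

```go
package main

import "fmt"

// transform is a named function type; Go's left-to-right declaration syntax
// keeps this readable where C would need a typedef'd function pointer.
type transform func(int) int

// compose returns a closure that applies f, then g.
func compose(f, g transform) transform {
	return func(x int) int { return g(f(x)) }
}

func double(x int) int { return x * 2 }
func inc(x int) int    { return x + 1 }

func main() {
	fmt.Println(compose(double, inc)(5)) // (5*2)+1 = 11
}
```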

It is just a matter of getting used to it. I switch back and forth between Go and other languages, and it doesn't bother me. (The thing that does get me is forgetting to use semicolons in other languages.)

Thanks again, I really appreciate your input! I'm going to have to switch between the two to get a feel for it (and like you say, get used to it). I'm a big fan for language grammars (I enjoy writing them myself) so am looking forward to digging into it some more to see how it all works :)

    int *i, j;
is more than a good enough reason for me. Makes interviews interesting, though: "Why are you repeatedly writing your variables and function parameters incorrectly?"

Interesting example. I'm still a newbie to Go, so would that be written as:

  var i int*
  var j int
My apologies if this seems like a simple question: I haven't played with the language enough to know :)

[EDIT: Sorry, I didn't realize how to format code!]

The asterisk goes before the type for pointers:

    var i *int
Unlike C, values are always initialized in Go.
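To illustrate the zero-value guarantee mentioned above (the helper name is mine):

```go
package main

import "fmt"

// zeroValues demonstrates that Go always initializes declared variables:
// pointers start as nil, ints as 0 - never C-style garbage.
func zeroValues() (*int, int) {
	var i *int
	var j int
	return i, j
}

func main() {
	i, j := zeroValues()
	fmt.Println(i == nil, j) // true 0
}
```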


I think coming from a C-based language background has the same effect for me, some of the declarations look much more difficult to parse but that could just be from long term exposure to one way of doing things.

What gets my goat about this paper is the amount of misleading information in it.

Re-parsing files multiple times is a solved problem using the standard header guard macro pattern combined with a slightly intelligent C Preprocessor. Yet quotes about the horror of header blow-up pervade this paper as if this is an exciting new problem that has been solved. It isn't; it was solved more than 20 years ago.

Yes, the guard macro pattern works. But it's something programmers manage, so it's a band-aid rather than a solution.

If you change a header you need to recompile all files that include it, even if the include was conditional. Now your build system starts to look inside files rather than just check timestamps. Similar tool bloat happens over and over again in large codebases.

I'm not convinced Go has the right solution. But it's disingenuous to claim that it's competing with the C preprocessor. It's leagues better.

You should enumerate all the information in the paper you consider to be misleading.

Can you name a language developed since the 80s that hasn't also solved this problem?

From the article: "It's worth mentioning that this general approach to dependency management is not original; the ideas go back to the 1970s and flow through languages like Modula-2 and Ada. In the C family Java has elements of this approach."

Python? Ruby? JavaScript?

There are many more, but I name these three as they are among most popular.

How has Javascript even addressed this problem?

i am kind of surprised that no one has brought up "#pragma once" or even external include guards (as proposed by John Lakos). both seem to drastically cut down build times...

Why then does every nontrivial C or C++ code base I've ever worked on take 5 or 20 or 60 minutes to compile then? Do you honestly think it should take 5 minutes to compile 1M lines of code?

The linker AFAIK also has bottlenecks, but with today's computers, and thinking about what a C compiler actually does, there's no reason you shouldn't be able to download (say) Python, Perl, or Ruby and compile it in 3 or 4 seconds. All of those codebases are 1M lines or less.

On compilation of C programs:

This is on my unremarkable but not slouchy three-year-old workstation, which has rotational disks and about 6GB of buffer cache. The cache has not been pre-warmed, but I can't guarantee it is totally cold. These numbers are with compiler optimizations on, too.

    $ cd ~/codes/postgres
    $ make distclean
    $ make -sj10
    All of PostgreSQL successfully made. Ready to install.
    247.83user 14.22system 0:46.26elapsed 566%CPU
Depending on how you count it, this is 600,000 to 1M LOC.

Let's try complete Linux compile without the drivers, via compiling user-mode linux. This is normally sufficient for debugging file systems or the memory management system (it's really a godsend for that), for example:

    $ cd ~/codes/linux
    $ env ARCH=um time make -sj16
    LINK linux
    321.04user 23.83system 1:02.49elapsed 551%CPU
All in all, I have always found the complaints about performance and header files in C programs (note: not C++) either outmoded or unconvincing. There are plenty of legitimate problems - including tricky issues with semantics - involving the preprocessor, such as something as simple as figuring out when it is safe to remove or reorder a header include! There's no need to imagine problems. This meme about how terrible compile times for large C programs are needs to be put out to pasture, or maybe I just need to be exposed to code bases about ten times the size, without being able to rely on incremental builds.

Thanks for actually posting some numbers, but to me this shows the problem and not the absence of it.

It looks like you have 6 CPUs, and say you have 2e9 cycles per second on each of them. 46s * 6 * 2e9 / 1e6 LOC = 552,000 cycles to compile a single line of code on average. That seems 1-2 orders of magnitude off, no? When you had a 10Mhz computer, did you compile at 20 LOC per second? There's a scaling problem here.

(That's ignoring disk access, but if you did it multiple times I doubt it would go more than twice as fast the second time. And to read 1M LOC from disk cold I'd guess should take on the order of 500ms anyway.)

Is what you're saying is that those compilations are fast enough? You wouldn't want it to take 1 or 3 seconds and see no reason why it should?

Also, Postgres and Linux I'm sure have a LOT better physical structure than your typical industry project. Most industrial projects are bound by talent, I would say. If choosing Go gets rid of this problem, then I would say that's a definite advantage on the side of Go, to be considered when choosing a language for a new project.

And it's true that incremental builds are more important, but on projects I've worked on they take 10-60 seconds for a 1 line change in a .cc file and much worse when you're changing headers.

> Is what you're saying is that those compilations are fast enough? You wouldn't want it to take 1 or 3 seconds and see no reason why it should?

In most practical cases (incremental rebuild) it does take 1-3 seconds. There is just not a huge amount of practical benefit to me. If one has interest in things taking 1-3 seconds for a full rebuild provided one has compiled the code before, there's ccache:

I had to install ccache (reason: this doesn't even cramp my workflow enough to bother until doing this benchmarking) and just do this:

    env PATH=/usr/lib/ccache:$PATH time make -sj10
    All of PostgreSQL successfully made. Ready to install.
    6.05user 1.88system 0:02.48elapsed 319%CPU
If I don't cheat by using ccache, then turning off the optimizer gives me about 50% of my time back:

    120.00user 11.72system 0:21.75elapsed 605%CPU
Here's the result of touching one c file in the executor and doing an incremental build (including linking):

    0.74user 0.18system 0:00.68elapsed 137%CPU
A randomly chosen header file gives me about two seconds, with --enable-depend on:

    3.33user 0.51system 0:01.53elapsed 249%CPU
I like Go, and appreciate that it compiles very quickly in some absolute sense (and for many other reasons), and I do not wish for a hideous preprocessor system. But to me, claims against the time it takes to compile a reasonably large C program are dubious enough that they can only lead to overzealous suspicion from parties that have to make a quick evaluation of what to spend their time on.

> they take 10-60 seconds for a 1 line change in a .cc file....

.cc is another kettle of fish. Just as the disadvantages of .cc should not be lumped in with .c, the opposite also has to be taken into careful consideration: some advantages of .cc are not available to .c, and some advantages of both are retained in .go.

Also, ./configure and Postgres's 'initdb'. Now that's slow, in spite of some efforts to speed up the latter.

One correction: assuming the disk access was for files scattered around on the drive, requiring a seek each, you're looking at 10ms per file. At 1,000 files (wild guess) that's 10 seconds.

For the record, the size of Postgres is at least an order-of-magnitude off from what counts as a "large" C++ program, at least as far as Google is (and many others are) concerned.

Try compiling Chrome (or just WebKit) sometime. It can take well over an hour on a beefy workstation. Incremental compiles are, of course, a lot faster than this, but a long way from "interactive" by any reasonable definition.

But wait, there's more! That bizarre crash or link error you're getting on an incremental compile? Yeah, it's the result of some screwy preprocessor problem that no one's taken the time to debug. You could try and get to the bottom of it yourself, but that would take somewhere between an hour and a couple of days. So instead you suck it up, clean the build output, start up a compile, and go grab lunch. This is the day-to-day reality for WebKit developers, and I don't accept that we can't ultimately do a lot better.

Large server-side C++ programs at Google suffer the same problem, because there's an enormous amount of shared code to compile. It's not as much of a dependency and script/preprocessor rat's nest as WebKit, but it can still get really slow, and you still occasionally have to give up and clean/rebuild when things go inscrutably wrong.

Any assertion that C++ header hell is "good enough" and/or "not that bad" flies in the face of this reality.

> Why then does every nontrivial C or C++ code base I've ever worked on take 5 or 20 or 60 minutes to compile then?

Because if you have a global.h that 100 .c files include, that global.h will be tokenized, parsed, and compiled 100 times (along with everything global.h itself happens to include), even with ifndef-guard optimizations working.

Yup, so then I think the advantage of Go is real (contradicting the OP). I mean it's not really something that's so awesome or innovative about Go -- it's just that C and C++ are so unbelievably backward in this regard.

On the HN and reddit threads about the proposed C++ module system a couple weeks ago, everyone said "this needed to happen 20 years ago", and that was absolutely true.

Yes, there are ways around it, and the paper discusses one way Google addressed it. But that's not the point. It's used as just one example to illustrate why C/C++ is not ideal for working at Google's scale.

I agree there's much more to the paper, and it's well worth the read. Bounds checking, avoiding pointer arithmetic, garbage collection, and concurrency all make sense at Google's scale.

I just think the bashing was misguided in an attempt to better justify adding a subset of Ada to C.

I think it's misguided to try and characterize Go as the addition of a subset of Ada to C. ;-)

In file A, #include "foo.h", parsed once. In file B, #include "foo.h", parsed once (again). In file C...

Include guard optimisations do not solve this problem.

I've been reading about Go a lot lately, starting with Effective Go, and now the tutorial book Learning Go. (and periodically referring to the FAQ when I'm stumped by design decisions)

It's been enjoyable learning it, though I've yet to write any code in anger with Go. I find some of the concepts around concurrency and OO design have really made me think about the way I write in other languages.

One thing I cannot quite make sense of though is the treatment of errors/exceptions. This article says: "Exceptions make it too easy to ignore them rather than handle them, passing the buck up the call stack."

But the example given seems to me to do exactly that:

    if err != nil {
        return err
    }

Worse still, if you forget to do something with err, or don't handle a specific err != nil situation, it seems easy for control to flow past the error handling and into code that expects no errors.

Are there small, easily understood open source projects written in Go, that would provide tangible examples of the benefit of this error handling approach?

The learning materials thus far have not really convinced me, maybe code in the wild would.

I think the criticism with exception handling is that it's a bad idea to silently pass errors up the stack by default. This encourages deferring your problems to outer scopes, at which point you often aren't sure precisely what the error means (or you may not have known to expect it at all).

With Go, although you can pass error codes up the stack if you want, you have to do it explicitly. This forces you to think twice before doing so.

This is a key difference: exception handling subtly encourages ignoring errors and letting outer scopes handle them, while error codes force you to deal with your problems within the immediate scope, and then decide what to do from there explicitly.

Eh, I don't think this is a good characterization of exception handling. It's certainly an accurate portrayal of exception handling in some languages -- like Python -- but in a statically typed language like Java, I believe exceptions are quite explicit. (I.e., if you don't handle an exception, or explicitly state that you are ignoring it, the compiler will complain.) Although it's been a while since I've used Java, so if this is no longer true, I'm sure I could find a language in which it is.

Personally, I would be OK with that kind of exception handling. I also enjoy Go's error handling. I think Python's exception handling is abhorrent and it constantly bites me in the ass.

It's been a while since I've used Java too, but I don't remember it requiring try..catch around ALL code. If you don't catch immediately, it defers the error to somewhere up the stack. This is what I meant by "ignoring" exceptions being the default behavior.

Shoopy gave the key term: checked exceptions. Your grandparent criticized unchecked exceptions, and gave praise to Go error handling for features found in checked exceptions.

There are obviously still non-trivial differences between checked exceptions and Go's error handling, but they aren't as world-shattering as the differences with unchecked exceptions.

If you don't catch a checked exception in your code, then you need to pass the buck to your callers, by adding a "throws" clause into the method signature.

The idea, as I understand it, is to use panic/recover in truly exceptional cases to unwind the stack and handle error conditions, but to never do so across package/api boundaries.

So essentially, use panic/recover as you would a much nicer version of longjmp.

(I believe) the only thing panic/recover lack compared to a more traditional try/catch is typed catching, but idiomatic usage is much different.

Honestly, I really like it. The typed-catching would be nice but the idiomatic usage of exceptions in languages like Java has always rubbed me the wrong way. Exceptions crossing package/api boundaries just seems wrong to me.

> (I believe) the only thing panic/recover lack compared to a more traditional try/catch is typed catching, but idiomatic usage is much different.

You can use reflection with recover to do it; the upside is that it's a more general mechanism, the downside is that you need to manually rethrow exceptions you don't handle.

BUT the bottom line is that the standard library doesn't throw exceptions. What you do in your code is really up to you, if you really think panicking across api boundaries is the right thing to do you can, there is no go police coming after you.

To expand on that: one of the typical occasions you could use recover is in web applications, where you want a server to continue running even if something exceptional happened during the handling of a request.

> but to never do so across package/api boundaries.

There are good reasons to panic across package boundaries. See my other comment. [1]

[1] - http://news.ycombinator.com/item?id=4879994

I don't think you'll be convinced until you have worked on a Go project of your own. Even then it might take a little while to fully see the point.

Go's error handling model is simpler than exception handling. There are fewer surprises. That's pretty much it.

Do you know whether any calls to the standard library panic over the package boundary?

Edit - Things like regexp.MustCompile aside.

No panics over package boundaries. Breaking this is almost sacrilegious in Go.

This really isn't true. A better characterization is "only panic in exceptional or unrecoverable circumstances." A perfect example of this is programmer error (i.e., violating a function's contract). It's perfectly reasonable to panic over package boundaries in this case.

Yet the standard library does just that? Only in package regexp, or elsewhere too? I'm not trying to say I know better than the designers, just that the waters are muddied by supposed rules being broken right off the bat.

regexp.MustCompile is for initialisation of global variables, which happens before main starts. You need a different function from regexp.Compile because you can't process the error at this stage or level. The panic is for exiting the program, since you can't do anything else; it isn't meant to be recovered from.

I understand why that's useful. Now, is it safe to say that nothing else in the standard library intentionally surfaces a panic? I'm not trying to troll: I genuinely want to establish this!

You have been misled. Better advice for panicking in Go is to panic only in truly exceptional or unrecoverable circumstances. For example, a programmer error (violating the terms of a function's contract) is a perfectly reasonable justification for panicking across package boundaries.

The Go standard library:

    grep -nrHIF 'panic(' /opt/go/src/pkg/* | wc -l
Note that this is somewhat inflated with a lot of `panic("unreachable")` calls. They are a symptom of the standard Go compiler requiring a `return` on every code path of a function that returns at least one value; a `panic(...)` relaxes this requirement. In this case, a panic arising would indicate a bug in the package, which is another good reason to panic across package boundaries :-)

Alright, that makes more sense to me.

Do you know if they intend to fix the need for phoney code paths? It seems odd that a language with nice support for first class functions and closures can't acknowledge when "if ... else ..." has complete flow coverage.

> Do you know if they intend to fix the need for phoney code paths?

It's a purposeful restriction imposed by the Go compiler, but not by the Go language specification.

And I think the idea is to encourage explicitness, and force cases like these:

    if cond {
        return ...
    } else {
        return ...
    }
Into a simpler:

    if cond {
        return ...
    }
    return ...
Reasonable people can disagree over whether this is a Good Thing.

Panics serve two purposes: to bring down the program in case of an invariant violation, and to reset a state machine, like setjmp/longjmp in C. See the template package for what I mean by the latter. The former should not be recovered from. The latter should not cross package boundaries; if it does, it's a bug, because only the package author should reset that state machine. Packages should be immune to malicious input: there should be no invariant violations if you supply unexpected input. If you get a panic from a package, it means there's a bug, and you shouldn't try to recover from it.

Thanks. I just wonder whether this fairly nuanced convention is or will actually be adhered to by library authors.

I find the guard macros discussion in this post most amusing. Just use Go and there's no point in talking about guard macros! =P

The dependency story becomes significantly less compelling if something like this proposed module system for C/C++ takes off: http://news.ycombinator.com/item?id=4832568

I wonder sometimes if Go will become to Google what TAL was to Tandem: a language uniquely suited to solve problems unique to the design space in which they operate. In Tandem's case it was non-stop computing; in Google's case it's kilocluster distributed algorithms.

I really appreciate watching it evolve.

I wrote a fully featured X window manager in pure Go [1] that I use full time. And it was pure joy compared to doing the same in C.

[1] - https://github.com/BurntSushi/wingo

Go is really too general for that to happen. From my (still limited) experience, Go is a safer C with garbage collection, interfaces, and goroutines. What I like about Go is that the authors did not over-engineer the language; as a consequence, it is pretty much as simple as C.

Obviously, things like the garbage collector and bounds checking make Go slower than C. On the other hand, although the Go spec does not mention the stack or heap, you still have a lot of control over memory layout. E.g., a slice of structs will be backed by an array of (unboxed) structs, pretty much as you'd expect.

Doesn't apply here, Go is largely as flexible, generic and versatile as C, minus a few restrictions and plus a few (awesome) additions. And the CGO interface lets you bind C libs as needed.

I write OpenGL apps in Go. Surely this is in no way characteristic of Google's internal use-cases.

Today's machines are multi-core and are not so bare-bones as to require carefully hand-coded malloc/free calls. That's what's changed since C, and that's what Go is (very well) designed for. Then there are a few syntactic goodies that don't imply performance side-effects or any black magic, but that C didn't have: duck typing, struct composition, interfaces, all that.

>Go is a programming language designed by Google to help solve Google's problems, and Google has big problems.

I believe that this part of the speech is wrong, or overstated (and Rob did it on purpose). There ARE actual accounts of how Go came to be, and none of them involve Google, as a corporate entity, wanting something in particular and asking for a specific set of requirements, the way, say, Ada was designed for the defence department, or Erlang might have been designed by Ericsson. The story, as told also by Rob Pike elsewhere, speaks of some guys getting together to hack on a new language addressing some pain points THEY had, mostly as a side project.

And, really, Go was never put front and center by Google; it's not as if Google seems to care a lot about it or advertises it. They've done far more for V8 and even Dart, including building large teams of full-time paid compiler guys to work on those, whereas they did nothing of the sort for Go.

Not much adoption at Google either: some MySQL load-balancing service in YouTube, a caching layer for Google downloads, a Google doodle served by Go, and beta support in GAE pretty much sums it up. Sure there should be some other small projects, but nothing earth shattering, the way C++/Java are used.

I think you're fundamentally misunderstanding the way these kinds of things happen at Google. Most projects of this nature have to work to get internal adoption just like any external toolchain. There's almost never an "edict from on high" that a project will use a particular technology (except to the extent that most code in production has to be written in a set of approved languages; a language created by Google itself still has to surmount this hurdle).

Pike, along with other Go team members, has been heavily involved in solving very large-scale problems (e.g., http://static.googleusercontent.com/external_content/untrust...) for a long time now, so they are solving pain they've personally experienced. But other teams at Google will treat Go just like developers in the world at large do -- with skepticism, until it's proven itself. It's doing so now, but it will take some time before you see heavy adoption, which is the way it should be with any new system of this magnitude.

>I think you're fundamentally misunderstanding the way these kinds of things happen at Google. Most projects of this nature have to work to get internal adoption just like any external toolchain.

Well, Dart and V8 are two examples to the contrary.

Not really.

V8 was simply a drop-in replacement for existing Javascript VM's, so there was no "adoption" curve, per se. It ran something like 40x faster than the one in WebKit when it was released, so everyone just said "Awesome, thumbs up! Stick it in there."

Google is spending some developer relations resources promoting Dart, probably because it's the kind of project that requires more public buy-in (including a VM that ultimately needs WebKit hooks, which is not solely controlled by Google). But it still has to surmount the same hurdles to internal adoption that other tools and languages do. I'm no longer privy to the details, but I'm fairly certain that there are just a few internal teams trying out Dart right now, to help it get its "sea legs".

And that's really the way it should be. These things don't happen overnight, and edicts from on high that a particular team will use a new technology don't usually work out well.

>V8 was simply a drop-in replacement for existing Javascript VM's, so there was no "adoption" curve

I never said anything about an adoption curve. I'm talking about the money spent by Google on the project and the promotion they did for it. V8 had a large team assembled, with a star compiler guy, and was heavily promoted with marketing material, a website, and even several videos.

Of course V8 was part of their plan to take over the web browser, whereas Go is just a language for them. Well, my sentiments exactly.

> Sure there should be some other small projects, but nothing earth shattering, the way C++/Java are used

Go is an incredibly young language - it only just received its first official release a few months ago, and yet it already has two complete and independent compiler implementations (both gc and the more optimized[1] gccgo), as well as a staggering number of libraries for most common tasks.

As someone who enjoys toying around with developing languages, I've been blown away by how solid the entire Go ecosystem is at this early stage.

> The story, as told also by Rob Pike elsewhere, speaks of some guys getting together to hack on a new language addressing some pain points THEY had, mostly as a side project.

I think every language (or every project) has as many stories of its origin as there are creators, if not more. They're not mutually exclusive, as you seem to suggest.

[1] in that it takes advantage of gcc, which is incredibly well-optimized (for non-concurrent applications, that is)

llgo (an llvm go compiler written in go) is also happily on its way to being a third compiler. Andrew Wilkins recently made a statement that he was able to get some Go code running in Chrome via PNaCl. Pretty exciting stuff.

I'd bet Pike, Thompson and Griesemer, as Google developers themselves, knew Google's problems for developers even better than "Google as a corporate entity" would have. It's true that they weren't asked by anyone upstairs to face these problems, but that's how Google works.

What is with all these Go articles? I wish Go would go the way of the dodo bird. Java rocks client and server, web and thick client.

