Go 1.1 RC1 is out (groups.google.com)
218 points by bockris 1721 days ago | 117 comments

I'm glad that escape analysis now allows for more natural if/then control flow.

But I'm still unhappy that using named return values still requires you to put a superfluous "return;" at the end of such a function.

The whole reason I use named return values is to cut down on the boilerplate code that carries the return value around -- so why not cut out the final boilerplate 'return'? It doesn't solve any problem or add any information!

Otherwise, there are some nice improvements here. Other than the things mentioned so far, I'm glad to see reflection filling out -- the day a Go REPL will be possible is approaching.

The final return does add information. It is common to name return variables to make the meaning clear to prospective users, without using them in the body. Just to pick the first one I found, here is the definition for regexp.Match:

  // Match checks whether a textual regular expression
  // matches a byte slice.  More complicated queries need
  // to use Compile and the full Regexp interface.
  func Match(pattern string, b []byte) (matched bool, error error) {
      re, err := Compile(pattern)
      if err != nil {
          return false, err
      }
      return re.Match(b), nil
  }
If the final line is missing, the compiler rejects the function. You might well have meant 'return', which could be inserted implicitly, but you might also have been interrupted while writing the function and meant to write a 'return something'. The compiler (really the language) requires you to be clear to avoid inferring an incorrect completion.

One nit: the return rules are separate from escape analysis, which is about keeping things on the stack instead of allocating them on the heap.

1. Or maybe you were interrupted right after writing "return" :-).

This one weird simple trick could work (and give you white teeth): implicit returns only get inserted if all the named return variables are assigned-to at least once in the body of the function. Perhaps that breaks your pure syntactic rule, though.

2. Yup, wrong term.

> Other than the things mentioned so far, I'm glad to see reflection filling out -- the day a Go REPL will be possible is approaching.

Very close indeed! Struct, array and function types still can't be constructed at run time, but as of Go 1.1, function values can be, as can slice, map and channel types. I exploit some of this in my `ty` package. [1,2]

[1] - http://godoc.org/github.com/BurntSushi/ty

[2] - http://blog.burntsushi.net/type-parametric-functions-golang

This is pretty cool!

I tried to convince rsc at some point about the enormous value of REPLs for rapid development.. imagine connecting to an embedded REPL on a server to do live debugging. I don't think I quite convinced him to drop everything and do it himself, though.

Do you have plans to experiment with a transpiler to smooth away all the remaining type noise?

Maybe a first step would be to explore how one could add pluggable 'dialect translators' to the go tool so that anyone could write simple extensions for the language while preserving the existing toolchain.

> Do you have plans to experiment with a transpiler to smooth away all the remaining type noise?

I don't have any particular plans; the `ty` package was a night of hacking plus several days of polishing/writing. :-)

One of the major bummers about moving into the reflection world is performance. My blog article talks about it a little bit, but it's also worth looking more closely at the things I didn't talk about in the benchmarks. (For instance, it seems that function calls in the reflection world pay a very steep price.)

Re transpiler: do you mean a {Language}-to-Go source translation? If so, it seems like you'd want to avoid `reflect` completely in that case. But maybe I am misunderstanding.

> I tried to convince rsc at some point about the enormous value of REPLs for rapid development ... I don't think I quite convinced him to drop everything and do it himself, though.

A REPL would be very nice, but a REPL using `reflect` would definitely be a lot of work. You'd need to make extensive use of the sub-packages in `go` to convert the `Read` portion of the `REPL` into appropriate reflection types. You could do it now, but you wouldn't be able to define new functions, structs or interfaces. And `reflect` cannot spawn goroutines either, which is a bummer.

> ... the day a Go REPL will be possible is approaching.

A REPL is possible even today.

All that is needed is an interpreter for the language, and there are a few already available.

Nothing new, this is the same approach taken by many languages, provide an interactive environment for the REPL and a compiler for distribution.

Being done since the early days of computing.

Or an incremental compiler.

For a REPL workflow an incremental compiler usually does not work, because most developers are trying out pieces of the application.

When I work with ML or Lisp based languages, I am always copying code snippets, which might land in separate modules afterwards.

Unless you mean a JIT at the REPL environment.

Just a wild guess, not sure about the true reasons: it adds the information that the control flow can hit the end of the function and that it actually returns something. All that given the writer doesn't write syntactically superfluous returns.

Also, I slightly disagree with you that named return values are only for short code. That may be true in functions that return at many points, but other functions would need extra boilerplate without them. If you write f, err := os.Open(...) inside an if block, that err is a different variable than your return value err. So you would need to add var f *os.File above the if in order to write f, err = os.Open(...).

That said, IMHO the greatest pro of named return values is well-understandable code, in particular for libraries.

1. I'm not sure I understand. If a function shouldn't hit the end, put a panic there. If you want to emphasize that it does, put an explicit "return;" in. But requiring you to put in a "return;" yields no information at all. Dead code isn't an error.

2. You're right, they're not only for short code. That however is where the forced "return" is most glaring.

I can see the benefit of the forced return, but only because it's bitten me in the past.

In Python, if you don't say `return` the function returns `None`.

That's less of an issue for Go since, aside from interface{}, fundamental types can't be nil. Plus the instance we're talking about here is with named returns, so the variables are already initialised.

From a personal perspective, I find having to include "return" a pain, as there are times when it's completely unnecessary, e.g. in switch statements or if/else chains where each condition has its own return. While it's an easy fix for if/else (just drop the else / default case), it makes the code look a little less pretty in my opinion.

The race detector sounds interesting:


I'll paraphrase something Andrew Gerrand said about this at a golang meetup last night:

Because race conditions are so hard to detect, the race detector is obviously prone to false negatives. Just because the tester doesn't find any race conditions doesn't necessarily mean that there aren't any. But the race detector never finds false positives. If it finds a race condition, that condition is very real.

You could just run your app with the race detector on all the time, but there is a performance cost to using the race detector.

One way to get around this in cloud/clustered environments is to deploy your app on a few machines with the race detector on and the rest with the race detector off. That way you're running your app with a production load and you're more likely to find race conditions, but you'll mitigate the performance costs associated with the race detector.

It makes sense to me that it can't possibly detect all race conditions but I had never really thought about the ability to detect any race conditions programmatically.

Running the detector on just a few nodes sounds like a great way to offset the performance penalty a bit. The docs on the race detector say that "memory usage may increase by 5-10x and execution time by 2-20x" which could be quite significant.

I also wonder about the effectiveness of randomly fuzzing your app with the race detector on as a form of testing.

I'll take a bet that detecting ALL race conditions is equivalent to the halting problem.

Certainly for an entire program it is... not sure if you limit it to the just the code paths executed.

It will always be inexact. All dynamic analyses of this type are, because they only observe what the program does as it executes and do not enforce any restrictions.

If you want a more guaranteed form of race freedom, you need to constrain what programs can do, either with dynamic restrictions on mutable state sharing (like Erlang or the parallel extensions to JavaScript do) or with a type system strong enough to reason about sharing of data (like Haskell or Rust).

We used this during a load test and it was able to find insanely small concurrency issues like integer increments; it's quite amazing. If you're doing Go, use it.

Based on ThreadSanitizer, which does the same for C++ (or similar) programs. Extremely useful.


Honest question: aren't the concurrency primitives in Go largely intended to make concurrent memory accesses like this an anti-pattern? If so, why are so many races cropping up that a detector tool is necessary/very useful?

> Honest question: aren't the concurrency primitives in Go largely intended to make concurrent memory accesses like this an anti-pattern

Intended, yes. But that doesn't mean we also develop "cleanly&properly" at all times, in the real world. It's a great "backup" for when formerly-prototyping-now-production code is getting slightly out of hand over time.

To expand on this point:

The Go documentation has a couple of great examples [1] that could easily happen in the real world (before the first coffee).

[1] http://tip.golang.org/doc/articles/race_detector.html

If we're talking about wishlist items, I'd like a tool, any tool, for detecting memory leaks. I understand they can't use Valgrind itself[1], but as it stands, detecting where leaks exist can be very difficult.

[1] http://code.google.com/p/go/issues/detail?id=782

Profiling Go allocations typically makes leaks clear. http://blog.golang.org/2011/06/profiling-go-programs.html

If you are worried about leaking goroutines,

  import _ "net/http/pprof"
in your web server and then visit /debug/pprof/goroutine (or even just /debug/pprof). That listing shows all the active goroutine stacks, but it groups bunches with the same stack into a single entry with a count. Scanning the list it is usually easy to see leaks (hey, why do I have 5000 goroutines with that stack) and also why they are stuck (because you have the whole stack).

Outside of using cgo or working on the Go runtime (written in C), how are you leaking memory in a garbage-collected language? Your code either has a reference to something, or it doesn't. If your code has a reference to something, how is a tool going to know you don't mean to have that reference?

Outside of the "dangling reference" issue above "leaking" in a GCd language is non-trivial. Here is a related Java discussion: http://stackoverflow.com/questions/6470651/creating-a-memory... . Also note Go doesn't have ClassLoader or class static fields.

You can leak memory by leaking goroutines. If a goroutine is waiting on a channel that nobody else has access to, it lives forever, as does the memory it references.

So, you can pretty easily leak memory without messing with unsafe things.

That is a pretty serious bug, but it is not nearly as easy to miss as a memory leak is in C/C++.

It is also pretty easy to detect by this (or something better), which might be handy in a big program: http://play.golang.org/p/XmKfgQ4TmS

I somewhat disagree. Even if you don't use channels as iterators/generators (which many folks do), it's not hard to end up with a goroutine blocked on a channel that'll never be closed/written to, and this situation (like memory leaks) can result from changes elsewhere in the program or branches not normally taken. A goroutine count doesn't seem like it'd be useful for diagnosing this in a non-trivial program. The runtime could probably detect if there are goroutines blocked on channels that no other goroutine has access to, and that'd be quite helpful for debugging, but as of now it doesn't. Even if it did, it couldn't catch all goroutine leaks.

> A goroutine count doesn't seem like it'd be useful for diagnosing this in a non-trivial program.

Sure it is: if you are leaking goroutines you will see an ever-increasing count. Even when your app is idle, if the count doesn't return to the proper baseline then you know you have a problem.

If you start a goroutine you should have a plan for terminating it. If you don't have a natural way, like the life cycle of handling a request, then you need to use channels (defer/close are your friends), waitgroups, condition vars, etc. I work on some fairly large Go applications and this hasn't been a pain point.

They finally fix:

  func hello() int {
      if true {
          return 0
      } else {
          return 1
      }
  }

  >go run func.go
  ./func.go:3: function ends without a return statement

It's nice that they fixed it, but I would always write

    func hello() int {
        if cond {
            return 0
        }
        return 1
    }

The reason I don't like this way is it can be harder to refactor. You don't want "return 1" happening if cond. For example, if you refactor to something like this:

    func hello() int {
       var result int
       if cond {
          result = 0
       }
       result = 1
       // New code with result

       return result
    }
Now, in this case, you don't want result to be 1 if cond, so you have to add the else branch. If you start with if-else, this is less likely to bite you in the future.

This particular bug just bit me in a bad way in production: I had what you have, had to make a quick production fix, did a refactor just like this, and missed adding the else.

When you do a lot of I/O you always have err return values included. For that I find the style without else much more convenient.

In languages with a ternary operator I would do this (I agree with Go's decision to leave it out, but in a case this simple, I'd use it if it were there):

  int hello() {
    return cond ? 0 : 1;
  }
In Go, I'd do this, but then I seem to like named return values more than most Go programmers...

  func hello() (res int) {
        if !cond {
            res = 1
        }
        return
  }
Of course, coming from C/C++, it would have to be an extremely special case for me to have logic where "true" mapped to 0 and "false" mapped to 1, because that just seems wacky.

I disagree with your use of named return values for something like that.


EDIT: Sorry, let me explain (I'm not an asshole, really!). I disagree with using named returns for things outside of signaling error/ok states (as explained by Andrew). I feel that our signatures should be written concisely for users of our API, not for our convenience.

Yeah, as do other other Go programmers I know of, which is why I said "I seem to like named return values more than most Go programmers".

I respect Andrew Gerrand and Brad Fitzpatrick quite a lot but I still often use named returns on even small functions. I find doing so usually makes the actual function code more concise and easily readable for me and I don't think the negative impact on the docs is significant. IMO auto-generated go-docs have far worse problems than the 'noise' from named returns, I think they suffer a lot more from core language decisions like the flexible interface system. And to be clear, I think the interface system in Go is brilliant and I love using it, but I also think it makes auto-generated go-docs hard to digest (and use as quick references) in a way that auto-generated OOP language docs (javadoc, doxygen from C++, etc) aren't.

Back when I was in college doing Java, Eclipse would throw up an error for unnecessary "else" statements. Ever since then I can't help but write it your way as well.

I've been taught by some pretty experienced engineers that in terms of readability, multiple return statements are a bad idea. Instead, you should conditionally set a return variable, and return it once at the end of the function. But I'm not sold.... what is HN's thought on this matter?

> I've been taught by some pretty experienced engineers that in terms of readability, multiple return statements are a bad idea

They were wrong and probably not as experienced as they/you thought. Guard clauses make code simpler and more understandable.

Guard clauses and single return statements are not mutually exclusive.

The whole idea of guard clauses in algol-derived languages (not functional ones) is to bail out immediately with an early return...


I understand that, but you can have a macro wrapping a goto statement to a predefined label which will do that for you (and potentially set some errors). It's debatable whether this really gives you anything, but I kind of like this style since you can replace the whole if statement with a single line, something like check_memory(pointer);. The gotos become particularly useful if you want to have some cleanup done at the end of the function even if something during the function fails.

Now you're assuming the language has goto, which is definitely not always the case these days.

Well the discussion is about go and go does have gotos. http://golang.org/ref/spec#Goto_statements

You were misled by engineers parroting the ideology they were taught in school. Else statements are far less readable than straight-line control flow with early returns.

Go discourages it http://golang.org/doc/effective_go.html#if

I think a blanket ban on multiple return locations is silly, as they can often be used to simplify code. There may be times when setting a return value is preferable, and I think you should use your judgement there.


The mentality of using a single return statement at the end of a function comes from languages that require memory management. In these languages if you return early you would cause a leak by not releasing your resources at the bottom of the function before returning.


With go's defer mechanic (defer f.Close(), defer l.Unlock()), and the way they handle errors, multiple returns are basically the way code comes out naturally. I think it's more readable than juggling a bunch more variables and returning at the end, others may disagree.

Logically, a simple if-else return tends to follow three basic forms:

    if x
      return a
    return b

    if x
      return a
    else
      return b

    if x
      r = a
    else
      r = b
    return r
Out of these I find the first is the most prone to maintenance errors. It's easy at a glance to see the final return, insert something in front of it, and miss that it needs to happen on another path. At least in the other two cases the indentation makes it clear that it's a conditional return path and you look for others.

I don't have a problem with a "throw" instead of "return a" in any of the forms because that's expected to be an aborted path anyway. In the case of two returns, maybe it is, maybe it isn't.

It's a small thing but when you read hundreds of thousands of lines of code, every little thing that makes it easier is worthwhile.

I prefer to avoid multiple return statements and follow the single-return-at-the-end rule.

However, I happily make exceptions for:

a) Simple shortcut checks at the top of the function. These tend not to increase the complexity of the control flow and can really simplify it.

    void free(void * p)
    {
        if (!p) return;

        ... rest of function
    }
b) Cases where it's just plain unnatural to do it any other way. This can occur with state machines and complex loops. When I do this, I make sure to put a comment way out on the right.

     void process_bytes(unsigned char * p)
     {
         for (;;) {
             switch (loop_state) {
             case specific_case:
                 switch (input_symbol) {
                 case end_symbol:
                     return;                      //----- note inner return
                 }
             }
         }
     }
Before folks jump on me for having nested switch statements or "complex loops" in the first place, let me point out that when I write this type of code it's usually because I'm processing a data format defined by somebody else.

In a question about this on programmers.se, a slightly different history of the "Single Entry, Single Return" mantra is presented: http://programmers.stackexchange.com/a/118793/4025 Essentially, it is argued that the practice that is warned about is to return to different places from the same function, not from different places within the function.

On a separate note, my take is that multiple returns are necessary to write readable understandable code quite often. Guard statements (either handling normal simple boundary cases, or throwing exceptions) at the beginning simplify logic and gives a clean reading of the code.

Only a sith etc etc. Like just about every other blanket statement about programming, this also is sometimes true and sometimes not.

I find that multiple return statements in a function are more often a symptom of ugly code instead of the reason.

I agree with the other commenters, but I'll say that I understand the original intent of the rule was to avoid confusing logic, such as:

    if (x):
        do(thing1)
        y := do(thing2)
        if (y):
            do(thing3)
            return 0
        else:
            return -1
    else:
        do(thing4)
        return 0


As you can see, such logic could quickly become hard to test and reason about. Does a single return help all that much? Not in and of itself, but it does tend to make writing such code a bit more painful, leading to better designs. However, guard clauses are a superior design in general.

I still avoid multiple returns in my main logic when side effects are involved, at least when I can.

I used to follow a bunch of best practices like this, that I now often find to be of too little benefit. If the function is small, multiple return statements won't significantly affect readability and it's simpler to code.

> If the function is small

And if the function is not small?

Then extract functions until it's small, a.k.a. refactoring.

...and if you end up needing to pass 20 variables via reference for state?

You have a group of parameters that naturally go together. ...when you have a bunch of methods that call each other, all of which have a clump of parameters that need this refactoring. In this case you don't want to apply Introduce Parameter Object because it would lead to lots of new objects

There are times when this transformation yields simpler code, there are times when it makes things more complex, and there are a lot of cases in between where it's a judgment call.

Then I might set a variable and return that. The goal is to make the function readily understood.

Having a single return point is a pretty good idea in C. In other programming languages, probably not.

Also not handling JSON null values: https://code.google.com/p/go/issues/detail?id=2540

I'm pretty sure that was fixed in both previous betas and tip for some time before that.

OT: I had not seen that there was a GCC compiler frontend for Go. Does anyone have any experience with that? Do you lose lots of Go specific tooling?

> Do you lose lots of Go specific tooling?

No, gccgo has been developed as a first-class compiler; the intent has always been to separate Go-the-language from Go-the-implementation, to prevent a single implementation from becoming the de facto standard over the language specification - a problem which we've seen happen in many other languages.

You can essentially use gccgo as a drop-in replacement for gc; it's only an extra command-line flag you add to the compilation to specify the compiler.

gccgo is actually likelier to be faster than gc for most computationally bound code, since it piggybacks off of the optimizations that gcc has incorporated over the last couple of decades. However, gc may be preferred if you're relying heavily on goroutines (ie, in the hundreds/thousands), since gc is better optimized for that.

Interesting comment in that thread:

"It's worth mentioning the function representation change, since it means closures no longer require runtime code generation (which among other things allows the heap to be marked non-executable on supported systems)."

Anyone know what that means?

Marking the heap's region of memory as non-executable improves security. http://en.wikipedia.org/wiki/Executable_space_protection

Sorry, I wasn't clearer, but I have heard of NX. The part I was wondering about was how the function representation changed.


Edit: dylanvee was faster, the NX-bit is the underlying hardware feature

The gory details are here: https://docs.google.com/document/d/1bMwCey-gmqZVTpRax-ESeVuZ...

TL;DR: functions are now represented as (function pointer, pointer to a context structure), whereas before they were represented as a single function pointer.

Right now it's hard to do numerical analysis on large problems using Go. To fix that, I'd be really excited to see bindings for MPI or something like it. In particular, go doesn't have a concept of an Allreduce or an AllToAll communication. I know that one can do that by calling the C MPI bindings from go, but it would be cool if there were a natural way to do it using go's parallelism and concurrency patterns.

It would also be nice to have a first-class array object - from what I understand, current support for multidimensional arrays is very C-like in that you're really dealing with pointer arrays.

Why not write your own, better, MPI with Go? That's exactly the kind of thing it is designed for.

I agree about the numerics. I'm tempted to write a very bare-bones implementation of Mathematica's kernel in Go, but I'll probably wait until people have done more numerics work in Go.

I haven't made the jump to using any of the 1.1 betas or this RC but the reported 30-40% performance bumps look nice. I'm really still waiting for an easy way to get gccgo working on os x though.

And I'm glad that they're bumping up the heap size to system-dependent values on 64bit systems. The fixed heap size of 16 GB was a really unfortunate constraint (even if it could be changed with a little hacking around).

Gccgo doesn't work on OS X (or any other non-ELF) platform, and it's useless on anything but Linux anyway as segmented stacks are supported only with the gold linker, which is not ported to non-Linux platforms.

Go noob here

Can actual products (Web apps in my case) be built with Go?

You can use Go with Google AppEngine [1], though it's experimental. AppEngine also supports Python and Java. It's a nice environment to experiment with Go, since you'll get quite a bit of free bandwidth / storage [2] to start out with:

> "Not only is creating an App Engine application easy, it's free! You can create an account and publish an application that people can use right away at no charge, and with no obligation. An application on a free account can use up to 1 GB of storage and up to 5 million page views a month. When you are ready for more, you can enable billing, set a maximum daily budget, and allocate your budget for each resource according to your needs."

[1]: https://developers.google.com/appengine

[2]: https://developers.google.com/appengine/docs/whatisgoogleapp...

Sure. Go doesn't yet have any mature web frameworks like django for python but quite a bit of what is needed to build web apps is already part of Golang. (Packages net/http, html/template etc.)

If I remember correctly Google switched their download service to using Go and there was a post here not long ago claiming they went from a lot of servers to merely 2 by switching to Golang.

See http://golang.org/doc/articles/wiki/ for an example.

The Gorilla toolkit [http://www.gorillatoolkit.org/] adds several more useful libraries for writing web apps.

Is Go really that efficient? I heard similar claims from somewhere else, and this has been a motivation for me to learn and then build something in Go.

And what might be the reason for that? Speed? Parallel processing?

As someone who writes Go code professionally, I find Go code to be really easy and quick to write, with fewer bugs/LOC than any other language I've used. Both compilation and execution speed are really fast, so you have both quick edit-compile-test turnarounds similar to a scripting language while having the execution speed of native code. Not quite as fast as C and C++, but getting there.

Writing concurrent code is incredibly easy with the primitives Go provides: goroutines (think of them like green threads multiplexed to 1 to very few OS threads) and channels (similar to pipes). No more faffing around with details of thread creation and teardown or an unreadable mess of callbacks (like you would have in system with event loops). So, if you've written concurrent software (like network servers) before, check it out, you will enjoy it.

> Both compilation and execution speed are really fast, so you have both quick edit-compile-test turnarounds similar to a scripting language while having the execution speed of native code.

It always makes me smile when young C and C++ developers re-discover the compilation times we old-timers already enjoyed with Modula-2 and Extended Pascal compilers in the mid-80's.

I learned programming with Turbo Pascal, so I knew before that compilers could be fast and produce efficient code. ;-) If only innovation in Pascal hadn't stopped, we might be using it more than we currently do.

It did not stop.

Ada, Modula-2, Modula-3, Delphi, Oberon, Oberon-2, Component Pascal, Active Oberon, Zonnon, ...

The industry just decided to look into another direction and now with buffer exploits everywhere, it is rediscovering that you can have strong typing with compilers that produce native code.

Now, were we talking about Pascal, or about everything that vaguely looks like a Wirth language? Delphi, alright; Ada, okay; but the others? Meh. None of them had anything fundamentally innovative, or they were just too obscure from the beginning.

> Now were we talking about Pascal or everything that vaguely looks like a Wirth language? Delphi, alright, Ada, okay, but the others?

I explicitly mentioned Modula-2 in my previous post, and most languages on the list are actually by Wirth or designed with his input.

> None of them had anything fundamentally innovative or were just too obscure from the beginning.

Well, I consider systems programming languages with GC pretty innovative, given the way Native Oberon and Bluebottle were used in Zurich's Technical University. Even if the languages are pretty basic when compared with Ada or Delphi.

Go's method syntax is actually based in Oberon-2.

See also: "Oberon Influences, Conspiration Theories" :) http://c2.com/cgi/wiki?OberonOperatingSystem

Also, Robert Griesemer studied at ETH Zurich and was a disciple of Wirth, working on Oberon: http://books.google.de/books?id=6kHs4s-79bkC&pg=PA257


As a language geek, I tend to collect such information. :)

"...its successors Modula and Oberon are much more mature and refined designs than Pascal. They form a family, and each descendant profited from experiences with its ancestors." Niklaus Wirth


Modula and Oberon do not vaguely look like Wirth languages, they are Wirth languages. And unlike Pascal, which was designed mainly for educational purposes, Modula and Oberon were designed for real-world usage.

> Modula and Oberon do not vaguely look like Wirth languages, they are Wirth languages.

I wasn't claiming otherwise, but rather referring to Ada.

> And unlike Pascal, which was designed mainly for educational purposes, Modula and Oberon were designed for real world usage.

Guess which one caught on for real world usage and which ones mainly stayed in academia. Exactly.

> Guess which one caught on for real world usage and which ones mainly stayed in academia. Exactly.

Selling Oberon compilers for embedded systems since the late 90's


z/OS is coded in a mix of Modula-2, PL/I and Assembly. Newer parts of the system are nowadays written in C++.

The problem with any systems programming language is that it needs to be forced into developers by an OS vendor, otherwise very few will use it as such.

This is sadly what happened with those languages.

While I don't have any first-hand experience with Go's performance, it is not that hard to believe it will be fast enough: it is a compiled language, so there is no interpreter/JIT overhead as in Ruby or NodeJS. Plus they try to stay closer to C where it makes sense (it has pointers but no pointer arithmetic, for example).

The synthetic language benchmarks also put it very close to gcc -O2 C performance. About the only things that need to be improved further (so I've heard) in Go are the GC, the coroutine/thread scheduler and crypto performance. They are already better in the 1.1 release, but crypto is still not close to OpenSSL performance.

Crypto performance improvements, briefly benched and blogged by jgc: http://blog.jgc.org/2013/04/update-on-go-11-crypto-performan...

> it is a compiled language,

Correction: It is a programming language with two compilers available as the main implementation, but there are also interpreters being developed.

Not a definitive list, but some links to companies using Go in production:


Yes. I am using it myself, and am about to deploy a production app. All Go.

Out of curiosity, what are you doing for error reporting?

I'm thinking something along the lines of honeybadger/hoptoad/etc -- automatic notification if the program crashes/deadlocks/what-have-you.

I'm using my own library at the moment. I might release it as open source in the near future; just waiting on some language features to stabilize.

http://microco.sm/ is all built in Go, as far as I know. buro9 loves it, and I bet deployment is a breeze.

We built the backend of www.healthcorpus.com in Go. It was great.

What about the 8 remaining issues (http://swtch.com/~rsc/go11.html) - will they be part of the Go 1.1 final?

gccgo issues: maybe, since gccgo is not included in the Go package

other issues: probably not, since these are new issues that are being worked on

I guess this website is going to have to change soon: http://isgo1point1outyet.com/

Why is that a website? I just can't believe someone bought that. And why didn't he buy 1outyet.com and make a subdomain, so it could be isgo1.1outyet.com?

> Why is that a website? I just can't believe someone bought that.

It was a demo in a talk by @enneff at Railsberry 2013: https://github.com/nf/go11

The site should update automatically once the repository is tagged.

This was an example of a website built with Go for a talk given by Andrew Gerrand recently.

That makes sense :)

Technically it's still correct. It's just a release candidate.


I kinda figured they wouldn't leave the polling time at 5 seconds.

Why not? I want it to flip over to "YES!" as soon as the release is tagged.

Solaris support would be nice...

I believe gccgo already works on Solaris. http://golang.org/doc/install/gccgo

It would, I started working on this a few days ago.
