
Introduction to Go 1.1 - sferik
https://go.googlecode.com/hg/doc/go1.1.html
======
acqq
The two pieces of news most interesting to me are:

> The garbage collector is also more precise, which costs a small amount of
> CPU time but can reduce the size of the heap significantly, especially on
> 32-bit architectures.

Previously, the 32-bit version was almost unusable in some cases: the garbage
collector kept allocations that it falsely believed were still in use even
when they were not. I'd like to know whether this means the problem is finally
solved.

> Previous Go implementations made int and uint 32 bits on all systems. Both
> the gc and gccgo implementations now make int and uint 64 bits on 64-bit
> platforms such as AMD64/x86-64.

This is a C-like can of worms, forcing you to test on more platforms just to
be sure your code is portable. I believe there's no reason to do such things
with primitive data types in the new languages of the 21st century (compare
with
[http://docs.oracle.com/javase/tutorial/java/nutsandbolts/dat...](http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html)).
I'd like to see some discussion of this, in my opinion, quite strange
decision.

~~~
fulafel
On the go-nuts list people have been reporting large slowdowns with the new
precise GC, and someone even backported the old GC to Go 1.1.

~~~
Raphael_Amiard
> with the new precise GC

The new GC is still not a fully precise GC; it's still conservative in places,
just "more precise" (probably meaning they refined their heuristics), so you
probably shouldn't call it a precise GC.

~~~
asb
I thought Go 1.1 featured precise GC on the heap and conservative on the
stack. Is that incorrect?

------
jgrahamc
Worth also knowing that the Go team has made significant improvements in
crypto speed:

[https://code.google.com/p/go/source/detail?r=eb2c99e6ec17db2...](https://code.google.com/p/go/source/detail?r=eb2c99e6ec17db272ef3a20fa416f3fe86ba9f29)

[https://code.google.com/p/go/source/detail?r=e7304334527349c...](https://code.google.com/p/go/source/detail?r=e7304334527349c395f150e109296dfc0f81d271)

[https://code.google.com/p/go/source/detail?r=e5620fd3ba5fb1d...](https://code.google.com/p/go/source/detail?r=e5620fd3ba5fb1d5fb083dbba864ca66d84fa859)

Fast approaching OpenSSL speeds.

Also, the integration of the network poller into the main scheduler is likely
to make a big difference to Go programs that make heavy use of the network.

~~~
SagelyGuru
Good work! While speedups in Go on the order of 30% are generally a good
thing, they do point to the relative immaturity of the language. You would not
be able to get these kinds of speedups in gcc, where things presumably already
are close to optimal.

Another worry I have is the 'backwards compatibility'. It sounds good but in
the long run creates its own problems. I think Java and even Python run into
major problems with this.

~~~
melling
Even though they are somewhat contrived benchmarks, I still like to refer to
the "Language Shootout" to get some idea of the quality of code the Go
compiler is producing. I would expect the numbers to be better than Scala's,
for example.

[http://benchmarksgame.alioth.debian.org/u32q/benchmark.php?t...](http://benchmarksgame.alioth.debian.org/u32q/benchmark.php?test=all&lang=scala&lang2=go)

~~~
gillianseed
Yes, I too find the Language Shootout a very interesting benchmark (while
knowing full well its shortcomings), and it shows (imo) that Go has a lot
of work to do in the optimization department, something one can confirm by
compiling CPU-intensive Go code with gccgo, which will often yield great
performance improvements.

Not surprising though, as GCC's optimization backend is very strong and has
been developed for a long time, meanwhile Go's compiler toolchain is very
young.

As for Scala vs Go, Scala compiles into Java bytecode and then makes use of
Java's excellent and very mature JIT compiler so I'm not surprised Go loses
out to it for the same reasons mentioned above.

Also note that the Go version used on the 'Language Shootout' is 1.0.3,
not the upcoming 1.1; it will be interesting to see what
improvements in performance have been made.

AFAIK the 32-bit version of Go (which is what you compared with in your link)
has not been given as much love in the optimization department as the 64-bit
version, which I guess would do slightly better, at least.

~~~
onan_barbarian
Go's compiler toolchain is not "very young". Go's compiler toolchain is
derived from the Plan 9 C compiler and is a decade older than modern gcc
(4.x), which substantially rewrote its optimization framework relatively
recently with "GENERIC" and "GIMPLE". There are no equivalent changes in the
Go backend.

The reason the code quality (in optimization terms) of Go's compiler isn't
that great is that the Plan 9 C compiler was always designed for fast
compiles. This is a matter of design.

~~~
Scaevolus
The Go compiler converts its AST to assembly directly. There's a peephole
optimizer and certain AST patterns are specially handled, but overall the
compiler is extremely simple.

By comparison, LLVM/GCC passes code through multiple internal representations
and performs many complex optimizations on them before emitting assembly.

------
mseepgood
It's not released yet; this is just a preliminary document. There are still
open issues for Go 1.1: <http://swtch.com/~rsc/go11.html> And I guess there
will be an RC first.

~~~
laumars
Is there any ETA on the release or is it just a case of "when it's finished"?

I'm about to start a largish project in Go and I'm thinking I'd rather start
in Go 1.1 rather than worry about upgrading and testing the project mid-
development.

~~~
dsymonds
It's "when it's finished", but you can expect it in the near future.

Go 1.1 is strongly backward compatible. You can start working on a large
project right now with Go 1.0.3 and it'll work just as correctly under Go 1.1.

~~~
laumars
It's the term "strongly" that concerns me as that's not really a guarantee.
However I think I will take your advice and start development on it regardless
(if I've written it correctly, then any breaking changes will be confined to
their respective module so fixing them shouldn't be much of an ordeal).

Thanks for the feedback :)

~~~
enneff
It is a guarantee. I promise, you won't have to change any of your code when
updating from 1.0 to 1.1.

~~~
ithkuil
True, except if you used the value returned by ListenUnixgram and relied on
type inference to store it: in Go 1.0 it returns *UDPConn, which becomes
*UnixConn in Go 1.1.

This is a clear API bug, and it's the only exception to the backward
compatibility rule.

------
melvinmt
For those of us who are wondering if 1.1 is stable enough to upgrade: Google
is already using Go 1.1 in production.

And the company is betting big time on Go, more than 50% of their codebase
will be (re)written in Go in a couple of years.

Source: core dev on the golang team.

~~~
GhotiFish
>And the company is betting big time on Go, more than 50% of their codebase
will be (re)written in Go in a couple of years.

!!! wow !!!

~~~
mseepgood
Don't believe anything without a credible source citation. Not everything
someone writes on the internet is true.

------
etfb
I grew up with Pascal and its derivatives, so I have a strong sense of the
aesthetic of programming languages. I can't take Go seriously, or bring myself
to use it, because it appears to have no aesthetic sense at all. It's ugly!
Strings of keywords and punctuation, no rhythm to it, just a lot of mess. Like
a cross between C and Prolog, perhaps, with a smattering of Python but only
the ugly bits. I really don't like it.

Now, if you want to see a recent language with a bit of style to it -- and
bear in mind I know nothing of how it is to use in practice, so I'm basing my
opinions entirely on the look of it -- I think Rust is one of the best of the
pack. So much smoother.

TL;DR (for any Redditors who may have stumbled in on their way to the Ron Paul
fansite): languages have a flavour, and Go's flavour is "mess".

~~~
lobster_johnson
Sure, Go is ugly. I had the same reaction as you. But it's so damn
_functional_. There are plenty of warts, syntax and design-wise, but once you
get over them, it's actually a decent language.

Compared to Erlang, for example, it's positively beautiful. I like Erlang's
execution model, but the unnecessarily gnarly syntax and the antique
approaches to some things (no real strings, no Unicode, tuples that depend on
positional values instead of named attributes, horrible stack traces, tail-
call recursion as a looping construct, etc.) has made me stay away.

Go is a decent alternative to both C/C++ and Ruby for me these days. There are
plenty of things I don't like about Go, but it's easy to forgive many of them
when the language makes you so productive (at least for the things I need it
for; it's pretty bad at web app development and completely useless for GUI
development), so I do. Sometimes you just want something that works, with
minimal amounts of pain. C++ works, but is always painful to me because it
feels like the language is working against me, not with me.

I love the simple syntax in Go; the no-nonsense approach to object-
orientation, the way the typing system allows adding methods to arbitrary
types, which may have been inspired by Haskell's typeclasses. I love having
native Unicode strings, first-class functions, a simple package system, built-
in concurrency and decent networking in the standard libs. I love the built-in
documentation generator (although it's too simplistic at this point; why not
Markdown?).

Things I don't like:

* Capitalized names for visibility.

* nils in the language instead of Haskell-style Maybe.

* nil cannot be used in place of false.

* The "non-nil being nil" insanity (an interface value can be nil, but it can also be non-nil and contain a nil value).

* Functions return errors instead of Haskell-style Either monads, or Erlang-style matching. (I see the problem with exceptions, but Go's solution is worse and leads to very awkward code that has to branch a lot to catch errors.)

* The fact that "go" is a language keyword; compare with Rust's very elegant task spawning, which is part of the standard library rather than being built into the language, and yet relies on first-class language constructs to integrate perfectly.

* That "range" is not extensible (that I know; it only works on maps, arrays, slices and channels).

* The whole GOPATH thing, pretty much requiring that you maintain a local tree with symlinks.

* That "import" when used with Git cannot be pinned to a specific revision (so if you rely on a third-party package, it could break your app at any time if HEAD changes; it's ridiculous, and counter-productive, to have to fork third-party repos just because of this).

* That packages have only a single nesting level. I don't mean that I should be able to create a package "x.y.z", I mean that given a package "x", all the files _have to be in the same folder_. I can't organize the tree in multiple subfolders.

* That statements are not expressions.

* That semicolon elimination necessitates some insane syntax at places (like having to end a line with ",").

* The lack of structural tagging in the language (similar to Python and Java annotations), and the fact that they _did_ add tagging, but only for struct fields, and the tags can only be strings.
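The "non-nil being nil" item above is the classic gotcha where a nil concrete pointer stored in an interface makes the interface value itself non-nil; a minimal illustration (the type names are made up):

```go
package main

import "fmt"

type myErr struct{}

func (*myErr) Error() string { return "boom" }

// mayFail returns a *myErr that is nil, but storing it in the error
// interface makes the interface value itself non-nil: it now holds a
// (type, nil-pointer) pair.
func mayFail() error {
	var p *myErr // nil pointer
	return p
}

func main() {
	err := mayFail()
	fmt.Println(err == nil) // prints false, even though the pointer inside is nil
}
```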

Right now, Go is, in my opinion, mostly an inelegant, weirdly-shaped thing
that fits into the weird hole not filled by C and C++, a stepping stone to the
next big language. I don't love the language, but I love how I can do the same
things I was planning to use C++ or Ruby for; as a C++ replacement I gain
productivity and simplicity; as a Ruby replacement I gain performance.

Go is, if anything, a very pragmatic compromise that favours simplicity over
complexity at the expense of structural and syntactical elegance. It looks a
lot like Java around 1.3-1.4, before the enterprise stuff derailed its focus.
My concern is that Go has no particular philosophy as a foundation; it bakes
in some current concerns (concurrency, networking, ease of use) and made
itself a very useful hammer for hammering today's nails, but may not be as
easy to extend to work with tomorrow's screws.

PS. I, too, come from Pascal — specifically, Borland's Object Pascal. While I
have fond memories of that language, I'm not sure elegance was one of its main
traits.

~~~
grey-area
Nice list of frustrations with Go, though I wouldn't call it inelegant - the
syntax may be homely, but the simplicity gives it a certain minimalist
elegance.

Re nils, yes this still feels pretty ad-hoc, I wonder if they'd do anything
differently there now if they could? Probably there is no easy way out now
they're past 1.0.

It would be nice to see an extensible range and I've had this thought myself
(as have others), there has been some discussion on the list, and they're not
sure:
[https://groups.google.com/forum/?fromgroups=#!topic/golang-n...](https://groups.google.com/forum/?fromgroups=#!topic/golang-
nuts/n1izOhyaO8I)

but then you can also turn it around and iterate by passing in a function to
the structure to iterate:

        func (self *DataStructure) Iterate(fun IteratorFunc)

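Fleshing that signature out into a runnable sketch (the container and callback types are hypothetical):

```go
package main

import "fmt"

// IteratorFunc is the callback invoked once per element.
type IteratorFunc func(item int)

// DataStructure is a stand-in container; since range isn't extensible,
// iteration is exposed by accepting a function instead.
type DataStructure struct {
	items []int
}

func (self *DataStructure) Iterate(fun IteratorFunc) {
	for _, item := range self.items {
		fun(item)
	}
}

func main() {
	d := &DataStructure{items: []int{1, 2, 3}}
	sum := 0
	d.Iterate(func(item int) { sum += item })
	fmt.Println(sum) // prints 6
}
```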
Re GOPATH, what do you mean here?

Re packages having one level - I actually love this - if you want to nest
things in folders, Go is trying to tell you to split it into sub-packages (in
that opinionated manner it sometimes adopts). Of course this might feel
restrictive, but if a package has lots of files I'd rather it was split up and
organised into packages with clear boundaries between them rather than into an
interconnected set of files/folders which only makes sense to the author of
the code because it's all in one package with no boundaries. This restriction
means we will see apps composed of packages by default rather than all in one
namespace because it's easier.

Re the versioning, they originally resisted versioning the language, but
eventually gave up on that idea and used versions, so I suspect for packages
they'll eventually recognise the need for this when the ecosystem of packages
grows sufficiently. Versions do introduce problems of their own (dependency
hell), but IMHO being able to reliably import snapshots of a package (tag,
branch or version) will be essential long term, without having to fork a repo
and rename it just to do so. If Go becomes popular, you can expect this to be
addressed by third parties if the Go team doesn't bother.

Re the tagging, in a way I'm pleased that they avoid adding lots of features
like this, as this is the sort of thing I hate even in the limited version
they have (on struct fields): it's untidy and gets misused and overloaded to
try to extend the language (see sql libraries in Go tagging fields with all
sorts of their own meta-data).

Re commas on lists, this bothers me less than having to put semicolon & LF on
every line, I think it's a fair trade.

Re philosophy, I can't speak for the Go authors, but I've found radical
simplicity to be its overriding principle, which explains some of the
limitations which annoy you above and the ditching of a lot of OO baggage
which languages nowadays are expected to carry whether they want it or not.
I've found the simplicity worth the trade for some small frustrations.

~~~
lobster_johnson
> _Probably there is no easy way out now they're past 1.0._

Yeah, any attempt to change this would have to introduce something like a per-
package pragma flag to disallow nils. I feel they have painted themselves into
a corner there.

> Re GOPATH, what do you mean here?

I mean that the "go" suite of build tools (build, get, install) rely on a
specific convention: Your GOPATH is supposed to point to folders each
containing bin, pkg and src subfolders with all packages.

Let's say I am developing a package A that uses my own package B. Both on
Github. You're supposed to then create: $GOPATH/src/A (which contains A's
files as immediate children) and $GOPATH/src/B (ditto for B).

The manuals I believe suggest that you have something ~/projects/go, under
which you create bin/src/pkg and put your work. But I don't organize my home
folder like that and I find it presumptuous that the tools assume that I do,
so I have ~/.go, which contains bin/src/pkg, and I symlink stuff into
~/.go/src. Acceptable if annoying.
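The symlink workaround described above looks roughly like this (all paths are illustrative; `~/projects/A` and `~/projects/B` are hypothetical working copies living outside the workspace):

```shell
# Illustrative GOPATH workspace layout for the go tool.
export GOPATH="$HOME/.go"
mkdir -p "$GOPATH/bin" "$GOPATH/pkg" "$GOPATH/src"

# Symlink the real source trees into $GOPATH/src so the go tool finds them.
ln -sfn "$HOME/projects/A" "$GOPATH/src/A"
ln -sfn "$HOME/projects/B" "$GOPATH/src/B"
```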

> _Re packages having one level - I actually love this_

I want to use folders as an organizing unit, the way they are intended. It
makes sense even for small packages.

Generic example (I have something very similar to this, but I don't want to go
into details): I have a framework. There is a bunch of core interfaces, and
then a bunch of implementations. Let's say that my interfaces are called Queue
(with concrete implementations FooQueue, BarQueue, BazQueue etc.), Block
(again, a bunch of concrete impls) and Stream (same). For my brain not to go
insane every time I look at my source tree, I want to have the core interfaces
in the root, and then subfolders queues/, blocks/, streams/.

That's not much to ask. Putting these things in separate packages with
dependencies on the original package makes no sense, in particular because the
original package has internal dependencies. For example, the framework has a
configuration builder that connects queues, blocks and streams, and it knows
certain things about the implementations and how they interconnect. Putting
_that_ into yet another package is just insanity.

It's _one_ package. And it needs subfolders. (Especially with how Go mandates
that tests be in the same folder as the unit being tested!)

I have encountered lots of C projects that pile all the files into a single
root folder, instead of having a src/ folder with appropriate subfolders by
category. I think that's what the Go guys are mimicking. I hate it.

I mean, just the fact that a package needs the Go files to be in the root is
pretty bad. I may have a docs/ there, a readme file, an examples/ folder, a
locales/ folder -- why must the source files go in the root? They're not that
special. I know you can import directly from a src, but the convention is
'import "github.com/foo/bar"', not "github.com/foo/bar/src"'.

As I said, I don't want Java-style nested package names. All I want is to be
able to organize the _files_. After all, the location of the files doesn't
matter to the compiler. (Java used to be like this at one point, too, unsure
if it's still the case: What you put in the package decl didn't need to match
up with the structure of your folders.)

> _Re the versioning, they originally resisted versioning the language_ ...

Hope this will be solved. I suggest something similar to Ruby's Bundler --
don't make it part of the language, but part of the development environment.

> ... _as this is the stort of thing I hate even in the limited version they
> have_

Agree, I would rather see no tagging at all, rather than the flimsy stuff they
have today.

~~~
grey-area
Re package paths, I guess the tradeoff here is that for go get/install etc. to
be able to fetch and install dependencies, the tools need to know where to put
them, and this was the simplest system they could think of that worked.
Nowadays I think it's ~/.go/src/github.com/user/pkg rather than just
~/.go/src/pkg. It would be nice to be able to specify goinstall and golib
paths or something, though, and I guess your symlink idea is a good
workaround. I have just gone with the flow on this one, and don't mind, as I
don't have GOPATH at the root of ~/.

Re folders versus sub-packages, yes I take your point - again there are some
conventions here they've tried to impose to make it simpler for the package
tools, but those could get in the way. I also hate projects that have all the
files piled into one dir, so I prefer to break into sub-modules where possible
and make the organisation clear. The other advantage of this is that people
could use just your sub-module if they want.

I've developed a few packages I wanted subfolders for, but those subfolders
naturally became sub-packages without much wrangling; seeing too many files on
one level forced me to organise them and create more packages. One case, for
example, mirrors your usage: a top-level package with interfaces, and concrete
types for specific databases which lived in their own folder, 'adapters'. This
became a sub-package containing the interface and concrete implementations,
and being a package forced me to remove any knowledge of the enclosing package
and any circular refs: the top-level package knows about the adapters, but the
adapters don't know about the top level. I felt this made the code better, not
worse, by removing internal dependencies, but YMMV, and I can see how it could
be annoying. I'm willing to forgive conventions like this if the tradeoff
feels worth it (in this case I really like go-gettable packages, and to have
that I'm willing to live with restrictions like some code must be in the root,
folder = sub-package, etc.).

~~~
lobster_johnson
With the subpackage, did you actually put the stuff in subfolders of the
original package, or did they have two different top-level folder names? In
other words, was it

    
    
        import (
          "github.com/foo/x-core"
          "github.com/foo/x-adapters"
        )
    

or

    
    
        import (
          "github.com/foo/x"
          "github.com/foo/x/adapters"
        )
    

If the second, then do you need to do something special with regards to
compilation or anything else?

One thing that bugs me with the second approach is that the package names then
have to be different. If the original core package is called "x", and you have
an "adapters" package, would the package name be something like "x_adapters"?
Just "adapters" seems a bit presumptuous (it's very generic).

~~~
grey-area
Because packages are namespaced by user, and it's easy to rename imports, I
don't worry about choosing obvious names - that's only a problem if it is
popular and used by more people than me anyway :)

So I'm using obvious names, with the pkg (say core) in one folder, and subpkg
in the subfolder named adapters.

    
    
        "github.com/user/core"
    

this has a few .go files in it to split things up into different concerns, and
a subpackage with adapters (again with a few files), imported with:

    
    
        "github.com/user/core/adapters"
    

and using adapters.XXX in the core code. You can also do this:

        import (
            mypkg "github.com/user/core/adapters"
        )


and use mypkg.XXX to refer to the subpackage, so you're not tied to the folder
names for package import names, that's just the convention, and if someone
needs to rewrite the name on import, they can do so.

~~~
lobster_johnson
True, although it gets annoying having to "rename" a package like this. If
it's a package you use _everywhere_ in your project, it gets old fast;
canonical, author-defined names are always the best option.

~~~
grey-area
Yes, it could be a pain unless it was just imported in one or two files. In
practice though I've not found collisions in package names to be an issue,
simply because of the layering of packages - the top level package is really
the only one which knows about these adapters and imports them - it then
presents an interface to the world which is under its namespace, so the only
worry for uniqueness is the top level pkg name really.

------
davidw
I'm interested in tracking Go as a replacement for Erlang. Some things that
should probably happen:

* GC works on 32 bit systems.

* Scheduler improvements so that things are a bit more robust ( <http://code.google.com/p/go/issues/detail?id=543> )

* A fairly solid library implementing supervision trees or something like it.

* Probably some other things, but those strike me as the big ones. The important thing is for the system to be able to run for months at a time.

I think they'll get there eventually. Maybe not 100%, but 'good enough' sooner
or later.

~~~
trailfox
Akka is also a viable Erlang alternative.

~~~
waffle_ss
Looks like a nice library but I don't think it's a serious contender to
replace Erlang because the JVM just isn't made for the level of concurrency
that Erlang's VM is. Off the top of my head as an Erlang newbie:

* You're still using Java threads, so the potential to access [shared memory][1] is there (i.e. "shared nothing" can't be guaranteed).

* You're still subject to global GC pauses, whereas the Erlang VM has per-process GC.

[1]: <http://doc.akka.io/docs/akka/snapshot/general/jmm.html>

~~~
trailfox
Recent benchmarks, and the responses to them, showed Akka out-performing
Erlang (read the full post):
[https://plus.google.com/112820434312193778084/posts/HdKFx4VQ...](https://plus.google.com/112820434312193778084/posts/HdKFx4VQtJj)

Shared memory when running on the same machine is actually more efficient
since there is no need to copy immutable objects.

If the sophisticated low-latency GC options available on the JVM are not
sufficient for you feel free to fire up more JVM instances on the same machine
or on other machines.

~~~
waffle_ss
I assume you mean the comment that links here, with Akka getting 2.1M
messages/sec? [http://uberblo.gs/2011/12/scala-akka-and-erlang-actor-
benchm...](http://uberblo.gs/2011/12/scala-akka-and-erlang-actor-benchmarks)

If you follow the comments there, someone improved the Erlang benchmark to 3M
messages/sec, beating Akka once again: [http://musings-of-an-erlang-
priest.blogspot.dk/2012/07/i-onl...](http://musings-of-an-erlang-
priest.blogspot.dk/2012/07/i-only-trust-benchmarks-i-have-rigged.html)

> _Shared memory when running on the same machine is actually more efficient
> since there is no need to copy immutable objects._

Many Erlang processes fit onto an OS-level thread or process, so passing
messages is very fast ("copying" shouldn't be equated with context switching
OS threads).

> _If the sophisticated low-latency GC options available on the JVM are not
> sufficient for you feel free to fire up more JVM instances on the same
> machine or on other machines._

Akka is a library and can't make guarantees about how the JVM will perform
garbage collection of actors while Erlang has it built into its VM. No amount
of creating new JVM instances will change that.

~~~
trailfox
_> I assume you mean the comment that links here_

You assume incorrectly. I'm referring to the version where Akka was 44%
faster than Erlang:
[https://plus.google.com/112820434312193778084/posts/HdKFx4VQ...](https://plus.google.com/112820434312193778084/posts/HdKFx4VQtJj)

In any event, the application logic in Java will likely outperform Erlang. See:
[http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...](http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?test=all&lang=hipe&lang2=java)
where Java outperforms Erlang by 3-30x and uses significantly less memory in
most cases.

> _Many Erlang processes fit onto an OS-level thread or process, so passing
> messages is very fast ("copying" shouldn't be equated with context switching
> OS threads)._

Many Akka actors fit into a single process and there are many actors per OS
level thread, so this isn't really a useful point for comparison.

> _Akka is a library and can't make guarantees about how the JVM will perform
> garbage collection of actors while Erlang has it built into its VM. No
> amount of creating new JVM instances will change that._

The JVM does the GC, not the library. Is 100 microseconds not short enough for
your application? [http://mechanical-sympathy.blogspot.com/2012/03/fun-with-
my-...](http://mechanical-sympathy.blogspot.com/2012/03/fun-with-my-channels-
nirvana-and-azul.html)

~~~
waffle_ss
Your 44% link is after crippling the benchmark by de-parallelizing it on top
of using a poorly written Erlang implementation. I suggest you read the actual
blog post that your Google+ post links to, including its comment section and
the links posted therein (which is how I arrived at my links):
[http://www.krazykoding.com/2011/07/scala-actor-v-erlang-
gens...](http://www.krazykoding.com/2011/07/scala-actor-v-erlang-
genserver.html)

> _Is 100 microseconds not short enough for your application?_

GC time is not a useful benchmark by itself. The important thing is that on
the JVM you can't predictably reason about when a global GC pause will happen.
Your JVM actors aren't shielded from these global GC events, so building a
soft real-time system is less practical on the JVM than on the Erlang VM,
IMHO.

~~~
trailfox
The benchmark isn't "crippled" by de-parallelizing. Execution is already
parallel; it just uses regular collections, as the concurrent ones weren't
worth the performance overhead, and in the end Akka ran 44% faster than
Erlang. In the comments you refer to, Erlang's roughly 50% win is about the
same magnitude as 44%, so I don't think there is much performance difference,
even though Java/Scala is faster than Erlang in general.

Regarding soft real-time requirements, Erlang may be a better choice, but I
doubt there are many use cases where it will make much difference.

------
GhotiFish
Whenever I look through Go's spec, I always get stuck on the same
question.

Why are the methods we define restricted to the types we define? I'm SURE
there is a good reason.

Others have said it's because, if you did allow that kind of thing, you might
get naming collisions between packages. I don't buy this argument: you can get
naming collisions from packages anyway, and Go resolves those by aliasing. Go
also allows for polymorphic behavior by packaging the actual type of the
method receiver with its value, so resolving which method to use isn't any
more complicated.

I don't get it; I'm sure there's a good reason! I just hope it's good enough
to justify throwing out that kind of freedom.

~~~
Jabbles
You can't add methods to a type in a different package because you might break
that package. By locking the method set of a type when its package is
compiled, you provide some guarantee of what it does, and the package doesn't
need to be recompiled again. This is central to Go's ideal of fast
compilation.

Besides, embedding a type is very easy
<http://golang.org/ref/spec#Struct_types>

~~~
GhotiFish
Hmmm. I can see how adding methods to a type in a different package would
require that package to be recompiled, but I don't see how I could break that
package. Unless there's some reflection magic I'm not considering.

I'm reading through the embedded types now. I am new to golang so this one is
lost on me. I thought if you wanted your own methods on a type from another
package, you just aliased it with your own type def.

though it looks like there's some kinda prototyping behavior being described
here?

    
    
        If S contains an anonymous field T, the method sets of S and *S both include
        promoted methods with receiver T. The method set of *S also includes promoted 
        methods with receiver *T.
        
        If S contains an anonymous field *T, the method sets of S and *S both include 
        promoted methods with receiver T or *T.
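The promotion the spec describes can be seen in a small sketch (the type and method names are made up): embedding T anonymously in S makes T's methods callable on S directly.

```go
package main

import "fmt"

type T struct{}

// Hello has receiver T, so it is promoted to any struct embedding T.
func (T) Hello() string { return "hello from T" }

type S struct {
	T // anonymous field: no field name, just the type
}

func main() {
	var s S
	// The promoted method: s.Hello() is shorthand for s.T.Hello().
	fmt.Println(s.Hello()) // prints "hello from T"
}
```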

~~~
Jabbles
For example, if you do a runtime type assertion to see if a variable satisfies
an interface:

    
    
        v, ok := x.(T)
    

<http://golang.org/ref/spec#Type_assertions>

If you "monkey-patch" x to satisfy T in another package, the value of ok may
change.
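A minimal sketch of the assertion in question (the interface and concrete type are invented for illustration):

```go
package main

import "fmt"

type Fooer interface {
	Foo()
}

type thing struct{}

func (thing) Foo() {}

func main() {
	var x interface{} = thing{}
	// Runtime check: does x's dynamic type satisfy Fooer?
	v, ok := x.(Fooer)
	fmt.Println(ok) // prints true
	if ok {
		v.Foo()
	}

	var y interface{} = 42
	_, ok = y.(Fooer)
	fmt.Println(ok) // prints false: int has no Foo method
}
```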

~~~
GhotiFish
Hmm, I can kind of see what you mean, but I don't see it as that big of a problem.

If you make package A depend on package B, and package B monkey-patches x with
method Foo, then x is now a Fooer.

x now satisfies the Fooer interface in package A; that seems OK, you
imported B after all. In things that don't import B, x doesn't satisfy Fooer.
Is this unexpected behavior? If B depends on C, C's x won't satisfy Fooer,
right?

------
thepumpkin1979
Still missing custom generic types and methods :(

~~~
_ak
Experience with Go in the last few years has shown that you don't really need
them. If you think you do, then you're doing it wrong.

Also: <http://research.swtch.com/generic>

~~~
AndrewDucker
And as the various comments there show, .Net does it right, and better than
any of the methods they list.

If I want to create a new wrapper class, the last thing I want to do is either
make it hold an Object, or have to write a new version of it for every type it
can hold.

~~~
grey-area
_If I want to create a new wrapper class, the last thing I want to do is
either make it hold an Object, or have to write a new version of it for every
type it can hold._

Can't you solve this problem with an interface defining the behaviours your
wrapper expects? The interface can then be passed in or held as a ptr without
requiring a specific type, allowing a generic implementation. You can create a
generic iterate or sort function for example which operates on slices of a
given interface. Sorry if I've misunderstood, if so a code example might help
clarify what you mean.

~~~
AndrewDucker
Let's say we have a Reply<T> which packages up the actual reply from a
service, along with the status of the service call, and any additional
warnings, etc.

In C# I can define this once, and then any service can just return a
Reply<string>, Reply<int>, Reply<Address>, etc. The calling method can check
the associated reply status, and then unwrap the data if the call was
successful, displaying any associated messages as needed.

Or, for a different example: If I want to store a long list of Customers in a
dictionary, keyed on String, then in C# I'd create a
Dictionary<string,Customer>, and know that it would always contain the types
I'd expect.

How would I do either of those in Go?

~~~
grey-area
I'm no expert in Go, but I think it can cover most common uses of generics
like those you mention, though you cannot of course replicate all C++ code in
Go without changing paradigms a little. It won't work in exactly the same way,
but it can usually solve the same problems elegantly.

1\. Generic reply (for a good example of this in use see the way errors are
used in go; errors are extensible and can contain info like that in your
example).

<http://blog.golang.org/2011/07/error-handling-and-go.html>

For your example, you could have a reply interface, defining the requirements,
then concrete types which conform: <http://play.golang.org/p/e18n36Ub5u>

There are probably shorter ways to do this; it's just an off-the-cuff example.
But it's quite possible to have generic reply types which respond to any
methods you want, and which can be created easily and extended if necessary.
You can also type-assert back to the original type.

2\. Just use map[string]Customer - Customer could be an interface (duck
typing) or a type.

Something like this: <http://play.golang.org/p/ZoftbY8mQX>

IMHO using an interface here is more interesting, as it lets you define your
requirements, and forget all about the type system, while letting the compiler
check that any uses will always work as they conform to the interface.

~~~
AndrewDucker
In your examples there, am I not having to create a new type per case? So a
Reply<String> and a Reply<Customer> will require me to create two classes?

~~~
ansible
You'd just want to have your types implement the interface for Reply, and
provide a method with the same signature that clients can use.

Here's what I think you want, written in (as far as I am able to) idiomatic
Go:

<http://play.golang.org/p/KI79ynfRg2>

~~~
AndrewDucker
So every type that can be returned from a service call has to implement Reply?
That's acres of template code being duplicated all over the place - and if I
need to fix a defect in it I need to change it in every place!

That strikes me as terribly unmaintainable.

~~~
grey-area
_So every type that can be returned from a service call has to implement
Reply?_

Yes, if you want it to actually be a Reply, it has to implement Reply.

 _That's acres of template code being duplicated all over the place_

No, if you need to reuse code you can use other techniques like embedding (not
shown in examples).

The features and restrictions differ from many other OO languages but I think
the creators of Go deserve a little more credit than you are giving them -
they do not produce unmaintainable code (see std lib for Go), worked on a
large scale with C++ OOP previously, designed the forking Unix operating
system, and are not unaware of the ramifications of the decisions taken.

------
Titanous
Here's the same document with a stylesheet: <http://tip.golang.org/doc/go1.1>

~~~
thrownaway2424
The post's URL should be replaced with this one ... most of the links in the
posted page are broken.

------
phasevar
Sweet performance improvements! Definitely looking forward to seeing Go 1.1 on
the benchmark game.

~~~
trailfox
Definitely looking forward to seeing if Go has made any progress in catching
up to Java and Scala in algorithm execution times.

------
ubershmekel
> In Go 1.1, an integer division by constant zero is not a legal program, so
> it is a compile-time error

How often do people accidentally divide by a constant 0? I personally use
that to simulate runtime errors all the time in Python. It's shorter than
writing "raise Exception()".

~~~
jsmeaton
I'm not familiar with Go, but I'd imagine something like this:

    
    
      const SomeNumber = 0
      // then much later in the program
      return 3 / SomeNumber
    

An accidental and not immediately obvious divide by constant 0.

~~~
JulianMorrison

      const weasels = 5    
      const lawyers = 5    
      const mustelids = weasels - lawyers
    
      feedWeaselsEach(chickenPortions / mustelids)

~~~
netcraft
would this count as a constant 0 and be caught by the compiler?

~~~
JulianMorrison
Yup. Constants are computed at compile time in Go.

------
przemoc
We have integer madness here and no one is commenting on it. Why make this
int change, and why now? I don't like Java, but well-defined integer type
sizes are a good thing. Most of the world uses the intXX_t typedefs in C and
C++ too.

~~~
laumars
Nobody is commenting because there isn't any integer madness.

If the size of an integer is critical to the execution of your program, then
you should be using _int16_ or one of the many other types that specify the
number of (un)signed bits.

The _int_ type is designed to be platform specific to optimize on execution
speed when the maximum size of the integer isn't an issue.

<http://golang.org/ref/spec#Numeric_types>

~~~
przemoc
See my response to grey-area.

And regarding optimization: the language-defined integer type size is one
thing, and the CPU register size used in calculations is a different thing.
It's not as if having a 32-bit int forces you (or the compiler) to use 32-bit
registers.

For every int, we (i.e. programmers) should be aware of the possible min and
max values that can be stored in it. So we could easily use fixed-size ints
everywhere, paying attention to signedness too, of course. The problem is
that there is a strong temptation to use _int_ most of the time, because it's
shorter to write. For the same reason many C programs use int instead of
unsigned short or uint16_t, for instance. A well-defined int size is part of
self-documenting code, as it shows mindfully applied constraints.

~~~
laumars
There's arguments for and against both methods. Back when I first started
programming, memory was a luxury (we're talking 80's personal computers here),
so it made a great deal of difference. These days, however, choosing an
integer type which most closely matches the values stored is largely an
academic process which costs developer time (though obviously there are
exceptions, times when it makes more sense to be mindful and times when it
makes less sense).

My point is this: the problem when talking about "best practices" is that
some approaches don't actually make a huge amount of practical difference, so
the choice is more subject to specific circumstances (eg developer time
constraints) or even just personal preference (eg the arguments between
()\n{\n\t and () {\n\t brace styles).

Anyhow, in answer to your other post, the int _n_ and uint _n_ types have been
available in Go since day one. So (and in answer to your original post as
well), this _int_ issue you raise isn't an issue unless developers have
written extremely bad code to begin with (ie written counters which they know
will exceed a 32bit integer and not cared enough to write an error handler nor
specifically use an _int64_ type). Go does do a lot more to protect the
developer from shooting his own foot off (more so than C/C++), but if we're
completely honest, if the developer is crap then no amount of hand-holding
from language design will help.

~~~
gngeal
"There's arguments for and against both methods. Back when I first started
programming, memory was a luxury (we're talking 80's personal computers here),
so it made a great deal of difference. These days, however, choosing an
integer type which most closely matches the values stored is largely an
academic process which costs developer time"

Actually, memory is even more of a luxury than ever, since your caches have
limited size and main memory bandwidth is atrocious, compared to the speed of
your execution units. Using 32 bit fields for stuff that safely fits into 16
bits can cost you for that reason alone.

~~~
laumars
Sorry, but that's not the same as a program failing to run because you've
exceeded 64KB of RAM. Plus your program will spend a great deal of time off
the cache anyway (thanks to the OS overhead and any multitasking taking
place).

I'm all for performance tuning and writing efficient code (like I said, I
learned to program back in the 80s when every bit of data mattered and various
tricks were used to work around the limited power of those systems), but let's
be clear about one thing, memory these days is not " _more of a luxury than
ever_ ". The exact opposite is true.

------
mortdeus
Can you please delete this article until Go 1.1 is officially released, just
in case the Go developers make any last-minute changes? Otherwise this
article may confuse people who skip reading it on golang.org when Go 1.1 is
released, because they already read it here.

~~~
mseepgood
It's not an article. It's the HTML page in the Go source code repository that
will be put online on golang.org once Go 1.1 is released.

~~~
laumars
He knows this and arguing the semantics of 'article' is completely missing the
point he was raising.

Personally, I think it's silly to delete this thread, but I appreciated his
warning that this [article / document / HTML page in a source code repository
/ whatever the hell you want to refer to this as] is only a draft document for
a version of Go that's not yet been finalised.

------
joeshaw
"We trust that many of our users' programs will also see improvements just by
updating their Go installation and recompiling."

Does anyone know if it is possible to determine what version of Go a given
binary was compiled with? Perhaps extracting some metadata from an ELF
section?

~~~
dmit
There's no guaranteed way, unfortunately. If you expect to need this in the
future, you can add a flag to your program that prints runtime.Version(). But
for existing binaries that don't use that function, the version will not be
included in the binary.

<http://golang.org/pkg/runtime/#Version>

------
VeejayRampay
I'd more than welcome any instructions and pointers on how to install Go 1.1
on my computer, which has 1.0.3 installed already: either to replace my
current version, or to install it alongside 1.0.3 for testing. Thanks a lot.

~~~
eridius
I would guess that right now you need to install from source.

<https://code.google.com/p/go/source/checkout>

~~~
VeejayRampay
I see, thanks a lot. And congratulations and thanks to the dozens of
developers who made that release happen.

------
djhworld
I'm excited for this release.

I'm not sure I understand what "method values" are, though.

~~~
karma_fountain
Yes, I wondered about that. Is it just that the function

    
    
      func(w *Writer, n int) {
      }
    

is actually a method, like:

    
    
      func (w *Writer) Anon(n int) {
      }

~~~
mseepgood
No, it means that it is now possible to use methods as first-class values as
well, just like functions: <http://tip.golang.org/ref/spec#Method_values>

~~~
karma_fountain
I see. Seems like a nice addition.

