I am a heavy user of Go; I have used it at my job and for side projects for years now.
The lack of generics in Go takes away most of the fun from coding after a couple of months. Despite what is being said, interfaces do not lessen the pain.
With a better type system, taking ideas from Haskell (ADTs, generics done well, the ability to implement interfaces for a type outside of its package, etc.), Go could be the perfect "dirty Haskell" for me, but right now it is just a meh language with a half-assed implementation.
That being said, it is still a much better choice than a lot of overcomplicated languages out there.
EDIT:
The biggest practical problem with not having generics is that you do not have access to list (slice, in Go) manipulation functions. This means you have to implement them yourself, on the fly, for the given type whenever you need them. This increases the time it takes to implement something, but it can also introduce a lot of bugs: you can't rely on existing, almost certainly bug-free implementations.
Of course this runs much deeper than just the list type, but on a day-to-day basis I feel this is the biggest pain. And the rabbit hole is deep: we could get into how multiple return values are just a really inconvenient (non-composable) workaround for not having a generic tuple type...
I haven't gotten that far. It seems the lack of generics is something noticed by those who have already written quite a bit of code. I also come from dynamic languages, Python and Erlang, so I think about types differently (I like them dynamic but strong).
Anyway, I like go because:
* of its fast builds.
* static compilation.
* speed, almost a replacement for C++ in what I do
* concurrency: goroutines, this is great compared to threads in other languages
* nice library -- json, http parsing, many other goodies, batteries included is good in my book
* gc -- nice to have a garbage collector
What I don't like as much:
* concurrency: goroutines are great, but once you've tasted Erlang's model, this one seems lacking. I like thinking in actors and mailboxes rather than in goroutines and channels. The actor+mailbox model somehow matches my world view better than nameless goroutines.
* no supervision control of goroutines -- I haven't found a way to kill a goroutine, or watch for its death, externally. This would make it easy to build a supervision tree and add some fault tolerance features to the language. A channel ping plus timeout could mimic it, but that seems like kind of a hack.
* gc stops the world sometimes.
* goroutines share the heap. I wish heap sharing were an option one had to enable, and by default it would work like Erlang (or Dart Isolates). As concurrent systems (which supposedly Go is the prime target for) grow, it is really fault tolerance that kills them. One goroutine killing the whole system, with thousands of concurrent requests in flight, is so C++-ish; I wish it were more Erlang-ish.
For something more Erlang-ish (but better), see the Flow Based Programming implementation by Vladimir Sibirov, GoFlow: https://github.com/trustmaster/goflow .
It lets you give your components names, and the inports of the components are somewhat analogous to Erlang inboxes as far as I can tell.
But then it improves on the Erlang actor/mailbox concept by keeping the "wiring" of the connections between boxes completely separated from their implementation, which lets you change the data flow much more easily (hook in a monitor, a logging facility, an extra test mock or whatever ... it is crazily powerful when you start to think about it!).
GoFlow is of course not the only FBP implementation, but a Go implementation makes quite some sense I think.
For supervision: Go's idiom is to handle errors immediately and locally rather than Erlang's die-and-restart approach, but for panics you can use defer and recover to alert the supervisor over a channel. You'll have to write the supervisor yourself, of course.
I've been developing a supervisor library, written by somebody (me) quite familiar with Erlang, as I've been programming in it for years. I've been waiting to release it while I pound on it locally, but I suppose it's time; I don't think I've touched the code of the supervisor itself in weeks. I'll see about getting it up on github in the next couple of days.
It's not as good as Erlang, not by a long shot, but it's still a bit of reassurance, and provides a useful default "restart" behavior out of the box. (Naively-written restarts have some bad and easy-to-trigger pathological conditions. Well-written restarts still have some bad cases too; crashing software can always crash in more pathological ways than you expected, but it's at least better.)
I'm also developing a library that gives you a "mailbox" like Erlang. I'm porting over a project in Erlang, and while I use native Go idioms where possible, Erlang code ends up with the Erlang semantics deeply embedded in it. (My thought is that a new project in Go should never use this library, but porting Erlang code may be eased by it.) That is still very much in progress; I'll only be starting cross-node clustering support here in the next couple of weeks, and without that there's not a lot of value to it. I'm still feeling my way through the right set of functionality and semantics for that, it's not ready for the public yet.
I tend to agree that simple generics would have made things a bit less annoying when working with custom containers (BTW, I think the standard library should have a proper set type; it's silly that there are a zillion set implementations out there with slight variations).
But the things that bother me most about Go are actually different (I'm saying this being a big fan of Go and having written a few projects with it):
1. The GC is very simplistic and prevents building memory intensive services (at least not easily) without suffering long pauses.
2. The lack of any robust IDE. I'm mainly working with Sublime, and used to work with the IntelliJ plugin. Both are decent, but Sublime has a half-working gdb integration, and IntelliJ lacks one completely.
3. Community maturity. go-get is awesome, and there are tons of libraries for anything - but most are immature and/or unmaintained. Usually you have a problem and find 2-3 promising-looking projects that already solve it, but then one hasn't been updated for 2 years, another looks awesome but is 2 months old and incomplete, etc etc. It will get there, I'm sure, but currently there is an inflation of toy projects, and no mechanism to find the prominent projects quickly (I think pypi's keyword-based approach helps with this a bit, BTW).
When their map implementation has union and intersect, I'll shut the hell up. In the meantime, people who need this have to implement these operations themselves, or try to figure out which set implementation on github to use.
This is my opinion of Go as well. Go has done a lot of things right, but the lack of generics is off-putting.
I would also prefer that the type system could catch null pointer exceptions, but most languages don't give this guarantee, so it's not that big of a deal.
Another thing people gripe about is that exceptions (panic) are almost never used, and that most functions return error values. I've yet to decide what I think about this issue, but sometimes it does feel like Go went for "NO EXCEPTIONS!" and then figured out they needed exceptions anyway and added panic/recover as an afterthought.
What I do love about Go though is the implicit interfaces, := assignment, forcing if and for to be followed by {}, no inheritance, concurrency and the non-forgiving compiler.
I agree with the panic sentiment. I almost never know when to actually use panic versus just returning an error. My guess is the reason panic is rarely used is that it's "bad Go" for a panic to leave your library/package.
Generally speaking, you ought to use error instead of panic. The only time panic is appropriate is when the program really ought to just crash.
For example, I'm writing a small program to update a database. If the DB connection can't be opened, the program can't do anything useful, so it can just exit with an error.
Also have large Go systems in production, and I agree lack of generics is a problem. The biggest problem for me though, is the lack of testability.
As a simple example, the fact that log.Logger isn't an interface means everyone creates a wrapper just for testing purposes. DI is also complicated enough that in some cases, it's just easier to forget about unit tests and stick with integration tests.
That is a problem that interfaces can solve. I agree it's a bizarre oversight that the logging package doesn't define an interface, but it's also one very easily corrected, and since interfaces are implicitly satisfied, the standard logger trivially meets the interface you define. We've found there are a few other places in the core libraries where we expect an interface and there isn't one, but, again, fortunately it is trivially fixable.
I'm actually finding Go to be one of the most testable imperative languages I've used. This isn't everything I've developed for it, but this is one of the fundamental tricks: http://www.jerf.org/iri/post/2923
Also remember that global variables are still Bad. If you've got a global Logger, you've already lost. Fortunately, struct embedding seriously mitigates the pain of passing things around and using them. While this will probably break apart into multiple objects as I continue developing, I've taken to having this sort of structure:
type Services struct {
*logging.Logger // actually our custom logging package
database.Database // locally defined interface
connections.Tracker // registers connections or whatever
}
I then have my primary objects embed a Services instance, so logging is just "obj.Error(...)". I pair that with a function that can construct a "null" Services, with a logger that doesn't log, a connection registration system that doesn't register, etc. Combined with what I linked, this enables even rather complicated interactions to be feasibly tested in relative isolation. I've also created some basic mock objects for some of the objects that may, for instance, assert that the Logger is called when expected, etc.
Technically none of this is impossible in other imperative OO languages, but Go has just enough twists on things to make it qualitatively easier, mostly implicit satisfaction of interfaces, which has had consequences beyond what I've expected. Embedding has had surprisingly pleasant consequences too; complicated objects can be safely built up with rich interfaces that do not require you to hardwire them together, or give you excessive access to the internals of the bits.
It would be really interesting to see specific examples for this - can you give a short example in code or pseudo-code of somewhere you really wanted them and didn't have them, and how they would improve things? There is a generic append, but do you mean sorting lists or something else? Apologies if this is obvious, but it would be enlightening for those not familiar with generics from other languages.
If you want to create a []MyString from a []string, you have to loop over it and transform by hand; with generics you could just write map(NewMyString, a).
D is complicated for sure, but it doesn't have the issues you mention, and is still fun after years. I would be curious what fatal flaw you find to not consider using it. To me it certainly feels like a dirty Haskell which values your liberty very much.
Andrei Alexandrescu did a reddit IAMA recently (with Walter Bright contributing) and if you read through the posts I think you'll come away with the impression that D is very much here to stay (I certainly did anyway).
Cool, I'm going to have to check that out. It looks like the right way to do it; I've also examined the options and decided that code gen is the only way to do it.
One thing that I would actually accept for "go generics" is the go tool developing instead some ability for users to invoke the same sort of magic that cgo has, to run some code before the actual compile phase. (If this already exists, I haven't found it, but I'd love to hear about it.) Then instead of the language developing "generics", it can simply be an extension that people who care about use, and those who don't, don't, because as you observe, in the end this really is what "generics" are in most languages. That would also open the door to, for instance, some slightly more automagical database bindings that generate them from a schema or something, or anything else that involves generating types from metadata.
There's quite a few tools like that that I don't want to function at run time, and I'm perfectly happy to have code generation work with it rather than lard up the language with every feature under the sun.
> Then instead of the language developing "generics", it can simply be an extension that people who care about use, and those who don't, don't, because as you observe, in the end this really is what "generics" are in most languages.
You'll sacrifice reflection doing it that way. This also causes the same build time problems that templates cause in C++: you have to parse the generics over and over every time you want to use them. You also need to have some facility for collapsing duplicate template instantiations.
You really do need to have generics built into the language to do them properly. Preprocessors cause as many problems as they solve.
How? Everything produced by the preprocessor is available at runtime as much as anything else.
"you have to parse the generics over and over every time you want to use them."
I am not seeing how this affects Go, which does not do inclusion the way C++ does. You run the preprocessor once, it creates some files in the local Go package which you should not manually edit, and from that point on it is just as if you had created them by hand.
"You also need to have some facility for collapsing duplicate template instantiations."
This one might be true, but given the cost/benefit tradeoffs, I'll pay in some cases. It's not as if we're talking about a happy paradise in which not having generics solves all my problems; it visibly causes me to violate DRY in some of my code. DRY is the highest principle of software engineering.
Plus I'm not sure it's true. With what's being described here, you'll only be able to instantiate the generics for a given data type within the package that declares that data type, as it involves adding methods. The result may be that you in fact can't end up with duplicate instantiations; things outside the package can't add methods to data types inside a given package, and true duplicates are trivially detected at compile time. (This is in fact a limitation of the approach, but for this reason may still be a good one. I'm satisfied to solve 90% of the use case for generics if I can get it without dragging in the pathological stuff.)
> How? Everything produced by the preprocessor is available at runtime as much as anything else.
You can't instantiate a generic with new types at runtime using the reflection facilities, like you can with other objects.
> With what's being described here, you'll only be able to instantiate the generics for a given data type within the package that declares that data type, as it involves adding methods.
That's so limited as to be almost useless. It prevents me writing a hash table library, publishing it, and then you linking to that library and using your Widget type as a key.
No, because Haskell's type system (ignoring certain relatively experimental extensions) is actually quite simple. Many people may find it difficult to learn, but only because it is so different and alien and not because it's some baroque monstrosity.
In some ways, Haskell's type system is actually simpler than Go's: there's no sub-typing and hence no real casting. Haskell also tends to eschew special cases in favor of more general solutions; for example, Haskell does not need any magical notion of multiple return values built into the language because this is naturally subsumed by tuples. Similarly, Haskell has generics instead of baking a paltry set of data structures into the language.
Also, algebraic data types are actually very natural. Tagged unions (also known as variants or sum types), unlike normal union types, are dual to product types (structs and tuples). This means they are actually just opposite versions of the same underlying structure: very symmetric. And just like in physics, symmetry usually means you're onto something.
Haskell has an extremely simple and elegant core; just because it's difficult to learn for some people does not mean it's a particularly complex language. It's certainly simpler than Scala, for example! Certain extensions add quite a bit of complexity, but the most complex ones are not super-widely used so you can reasonably get away without ever dealing with them.
With a better type system, taking ides from Haskell (ADT, generics done well, ability to implement interfaces for a type outside of its package etc) Go could be the perfect "dirty Haskell" for me
I wonder if you might like OCaml (which is my sort of "less strict Haskell"). There's also Rust of course, but that's explicitly not for production use.
I find it surprising that OCaml is not more popular. It provides many of the advantages of Haskell (except purity) and is easier to reason about for most newcomers due to strict evaluation.
One of its problems is that its concurrency support is lacking. But that doesn't seem to be a problem for other languages to become popular :).
It's easier to reason about performance. Since both are high-level languages, chances are expressiveness and correctness matter more, and I've found Haskell better at both, even for beginners.
OCaml had some pretty specific downsides beyond just the GIL (which is pretty absurd on its own). Typeclasses are a really big loss, and the module system can't make up for them. OCaml also had absolutely terrible tooling and an abysmal standard library, although this has been improving pretty quickly in the last few years.
Ultimately, after having used both languages quite a bit, I just find OCaml quite a bit more awkward than Haskell. OCaml's advantages just don't make up for this, although I really love some features like polymorphic variants and modules that don't suck.
I'd still use OCaml for a few particular tasks, like web programming with the incredible js_of_ocaml, but Haskell is generally my default choice both for programming and trying to teach people.
Yes, OCaml is really similar to Haskell (it is an ML, after all), minus the purity. But there is no support for multicore, the implementation is not really being improved, etc., so I decided against using it.
But their recommended workaround for writing generic containers is to use the empty interface and type-cast in and out --- which has all the drawbacks of the Java implicit boxing. And on top of that, it clutters up the code with type casts, adding injury to insult because the type-cast clutter actually costs you type safety. So, this has all the disadvantages of the Java approach and none of the advantages. Win?
At compile, there's one version of a generic method (say List<T>.Add). Compilation isn't slowed down like in C++.
At runtime, a generic method is compiled (on demand) into unique versions for each T encountered if T is a ValueType (int, double, etc. including user defined ValueTypes). This avoids boxing, like in Java.
Also at runtime, for all T's that are reference types a single common version is used with implicit casting as in Java.
C#'s draw back is that there is a small cost incurred on the first use of a generic method, and then an additional cost for each new ValueType used (if any). In practice this doesn't ever seem to be an issue, but it's there.
That sounds like a variation on rsc's 3rd approach (but with runtime code generation instead of boxing) with the same drawback: slower execution. It also makes certain kinds of debugging harder (how do you map a PC that's pointing to generated code? what if the code generation is the thing going wrong?).
C# on CLR is a VM approach, whereas Go compiles to native code. Concepts that are easy in the first (e.g. runtime code generation) often don't map particularly nicely to the second.
Why would you expect execution to be slower? If you invoke List<T>.Add(T) where T = int, you end up running the same machine code that would have been generated if you instead had a method List.Add(int). There is a cost paid to generate that specific method, but it's paid on the first invocation. I'd expect it to be much faster than boxing, because you can elide a bunch of allocations and generate tighter code, since you have additional typing information at code-gen time.
I suppose debugging could go sideways in theory, but it doesn't happen in practice on .NET and I doubt the go team couldn't figure out how to make it work.
It's worth noting that .NET never interprets bytecode; it always converts a method into machine code before execution. Go has already had dynamic code generation as part of its implementation (I don't know if it's still around in 1.2), so it's hardly like it can't do the same thing .NET does.
Okay, then how does AOT work with C# generics? Really, how? Do you have to pre-declare some known types ahead of time? (Or could it be that the AOT compiler scans all possible variants? That would be doable, since it's a static language.)
AOT is a Mono extension to .NET, there are some cases it doesn't work and generics can cause some of them. See: http://www.mono-project.com/AOT for more details
For .NET proper, new machine code is generated at runtime. I believe (but do not know for sure, offhand) that it basically takes the IL definition of the generic method, slots the now known type into it, and JITs it to get the machine code it needs. I do know that there are couple bits of IL that are meant to make it possible to write the same code to deal with both reference and value types, which is probably related.
Here's a long quote by Rob Pike, which whether you agree with the points he makes or not, explains why Go does not have generics.
"Early in the rollout of Go I was told by someone that he could not imagine working in a language without generic types. As I have reported elsewhere, I found that an odd remark.
To be fair he was probably saying in his own way that he really liked what the STL does for him in C++. For the purpose of argument, though, let's take his claim at face value.
What it says is that he finds writing containers like lists of ints and maps of strings an unbearable burden. I find that an odd claim. I spend very little of my programming time struggling with those issues, even in languages without generic types.
But more important, what it says is that types are the way to lift that burden. Types. Not polymorphic functions or language primitives or helpers of other kinds, but types.
That's the detail that sticks with me.
Programmers who come to Go from C++ and Java miss the idea of programming with types, particularly inheritance and subclassing and all that. Perhaps I'm a philistine about types but I've never found that model particularly expressive.
My late friend Alain Fournier once told me that he considered the lowest form of academic work to be taxonomy. And you know what? Type hierarchies are just taxonomy. You need to decide what piece goes in what box, every type's parent, whether A inherits from B or B from A. Is a sortable array an array that sorts or a sorter represented by an array? If you believe that types address all design issues you must make that decision.
I believe that's a preposterous way to think about programming. What matters isn't the ancestor relations between things but what they can do for you."
You're right, kinda. It's been obvious from the start that this is a bell labs venture so to speak. I love the language, and assume the likes of Pike, Thompson, and Cox to know enough to know better. Community driven languages suck, so I leave it to those much more experienced than me to make these decisions, and I'm ok with that.
Disclosure: I've recently been designing a rich type system for representing hierarchical datasets, and I've also worked on a large production system written in Go, so I have some skin in this game.
> Here's a long quote by Rob Pike, which whether you agree with the points he makes or not, explains why Go does not have generics.
Does it? It seems more to explain why Go doesn't have C++-style classes.
In fact, this quote is almost like hearing the concept of metaprogramming being dismissed because C macros are a disaster.
> To be fair he was probably saying in his own way that he really liked what the STL does for him in C++
The C++ STL is hardly the ambassador of type theoretic solutions to problems of software complexity and safety.
> What it says is that he finds writing containers like lists of ints and maps of strings an unbearable burden.
Writing things once is not the problem. If you want to write a red-black tree of integers, you don't want to write the whole thing again for some other totally ordered type. So you're left with casting your way out of the problem on top of a boxed implementation. Great, you've sacrificed DRY for safety and readability because a rich type system was too hard [1]. Note: almost wrote "proper type system".
> Perhaps I'm a philistine about types but I've never found that model particularly expressive.
Or perhaps Rob Pike just hasn't explored the relevant literature in enough depth. At one point he admitted that he didn't know that structural typing had already been invented previously! This isn't to criticize Rob, I find his talks fascinating, I think he's awesome, he's a friend of my boss, etc. But he's hardly the first hard-core hacker to be ignorant of the degree to which type theory has seen dramatic advances since the 1980s.
Just think of John Carmack, who after a very long and successful career using C and C++ is only now beginning to espouse the benefits of rich type systems and static analysis, going as far as to rewrite Wolfenstein 3D in Haskell.
> Type hierarchies are just taxonomy.
Here is the crux of the problem. Rob has silently switched from the ideal of "generic types" -- which 20 years of programming language research has taught us requires engaging with ideas like type classes, higher-kinded types, type covariance, and algebraic data types -- to the straw-man of "type hierarchies". Of course type hierarchies are crap. But that doesn't mean you have to return to the stone age.
Okay, well, structural typing isn't the stone age, but it's the beginning of the journey, not the end.
[1] to be fair, it is hard, Rust is still struggling to make the right balances, as Niko Matsakis's blog posts so entertainingly demonstrate.
This is definitely the wrong place for it... but I noticed with the vet and godoc move that I had trouble "go get"ting them.
I have GOROOT in /usr/local/src/go, and root owns that, as it's put there in our dev builds by our bootstrap scripts.
I have GOPATH in /home/{user}/Dev/Go
One cannot go get as root from GOROOT for system install, and one cannot trivially go get in the user GOPATH either. The latter is due to vet and godoc needing permission against GOROOT.
If you sudo the go get within GOPATH, that works from the GOROOT permissions point of view, but then (once you get past Mercurial freaking out) the bits of GOPATH/pkg and GOPATH/src that relate to vet and godoc are owned by root... making any subsequent go get of, say, the HTML parser fail because it cannot write to the folder.
I ended up doing this as a user to go get vet and godoc within GOPATH:
echo "[trusted]
users = *
groups = *
" > ~/.hgrc
sudo sh -c "export GOPATH=$GOPATH && export GOROOT=$GOROOT && export PATH=$PATH:$GOROOT/bin && go get -u code.google.com/p/go.tools/cmd/godoc"
sudo sh -c "export GOPATH=$GOPATH && export GOROOT=$GOROOT && export PATH=$PATH:$GOROOT/bin && go get -u code.google.com/p/go.tools/cmd/vet"
USER=$(whoami)
GROUP=$(id -g -n $USER)
sudo chown -R $USER:$GROUP $GOPATH
It works, but there is a better way, right?
Do the docs assume Go source installs are user specific?
Beg your pardon, jumped the gun myself there. Misread a comment on the IRC channel: "unless there's some massive bug, I doubt we'll see the tip of the tree change"
People using lots of goroutines should note the change of minimal stack from 4KB to 8KB.
“Updating: The increased minimum stack size may cause programs with many goroutines to use more memory. There is no workaround, but plans for future releases include new stack management technology that should address the problem better.”
There is a runtime/debug call to restrict the max size of the stack. Somehow I feel like there should be one to set the min size too…
It looks like 1) generics are not very well aligned with the main design goals of Go, and 2) it would take a very well-thought-out design to align them, if that's possible at all.
From the FAQ: "Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it. Meanwhile, Go's built-in maps and slices, plus the ability to use the empty interface to construct containers (with explicit unboxing), mean in many cases it is possible to write code that does what generics would enable, if less smoothly."
I've seen this argument several times, and I'm not sure I understand it. The recommended workaround (empty interface and explicit unboxing) is, at runtime, pretty much how Java generics work: the parameter types get "erased", and the bytecode operates on Object (the closest equivalent to Go's empty interface) no matter what they are. But at compile time, the compiler infers the (obviously necessary) typecasts, so, you don't have to explicitly clutter up your source code with them. That strips away a layer of crud which obscures the underlying logic of the code, making it harder both to read and to write.
The other constant line from the Go implementers is something like "generics have proven troublesome in [unnamed] other languages, so we don't want them until we're sure we can get them right". This may be a reference to C++, in which compiling templates turns out to be a real pain. But at this point, the literature is full of better ways to do it.
In this example, one function probes the underlying type using a type switch, t.(type), and the other uses a type assertion, val, ok := t.(int). The type assertion returns the value of the integer and true if t happens to be an integer (a Go function can return multiple values).
If you pass an object as an "empty interface{}" you can test the underlying type.
So, the answer is that it is pretty much the same as in Java, since there you also use reflection to get the type of Object:
if (o instanceof String) {...}
The type assertion is just syntactic sugar.
The difference is that Java stores the type in the actual instance, while Go stores the type in the 'reference' (which is normally a type-pointer tuple). Unfortunately, Go's approach can lead to the situation where nil (null) is not nil: two references can both have nil as their pointer but different types.
Is this purely evaluated at runtime, or does Go annotate types with its static analysis and restrict the interface in some ways? IOW, are default and else cases made optional by Go's static analysis, which would help Go throw its arms up at compile time if an unexpected type is passed?
> Is this purely evaluated at runtime or does Go annotates types with its static analysis and restricts the interface in some ways?
Interfaces in a nutshell:
Go checks at compile-time that any concrete variable passed where an interface is expected satisfies the methods of that interface.
Conversely, Go checks at compile-time that the program never attempts to invoke methods on an interface value that are not guaranteed by the interface.
In this example, there is a compile-time error (not a runtime error), because we attempt to call a method not guaranteed by the interface, even though the underlying concrete type provides that method: http://play.golang.org/p/EaQQpv-NAW
This provides type safety: if your program compiles, you know that you will never run into a runtime error by trying to invoke an undefined method.
When you do type assertions, you're inspecting the underlying value at runtime, using the same runtime type information that the reflect package (http://golang.org/pkg/reflect/) exposes.
However, remember that using type assertions essentially sidesteps the benefits of having interfaces. Interfaces allow you to invoke function calls with type safety on values of unknown type, as long as it is known that the underlying type provides the required set of methods.
I'm pretty sure they will never happen. The designers of Go don't seem at all interested in having them, and they show no sign that they are actively studying the problem and exploring how other languages solve it. And as Go becomes more and more popular, there is less and less incentive to make a major language-breaking change, so as far as I'm concerned, if you want type parametricity, you're better off looking elsewhere.
Are there any plans to self-host the Go compiler in Go itself? Rust is self-hosted in Rust (originally bootstrapped by a Rust compiler written in OCaml). I've found a couple of Go compilers written in Go that bind to LLVM backends, but they are just prototypes.
The Go developers make a big deal about compilation speed. Wouldn't Go's goroutines have some interesting possibilities for parallelizing compilation?
There are some in Stockholm. We build our back end in Go and will hire a back-end dev soon. In addition to us, I know Spotify is building some things in Go, and a friend's company is also building their back end in it.
Yes, though for an internal project so I can't share it here. Go is pretty well suited for building REST APIs and combines quite easily with a front-end JS app built in Angular.
The two are rather separate. The only way I can see combining them is if there is extensive server-side templating, but angular.js kind of discourages that. So you can use Go to build a RESTful server and then use angular.js on the front end. But that decouples the system to the point where it doesn't make sense to talk about an advantage of combining the two.
Having myself written such a site (http://www.goread.io/), I concur. Having Go on the backend didn't provide any benefits with an Angular frontend over another language. I just think they excel at their respective purposes - that's why I chose them.