My Go Resolutions for 2017 (swtch.com)
329 points by mitchellh on Jan 18, 2017 | 197 comments



    > Part of the intended contract for error reporting in Go
    > is that functions include relevant available context,
    > including the operation being attempted (such as the
    > function name and its arguments).
I know the Go folks don't like exceptions, but this is an example of them learning the hard way about one useful thing they lost by deciding not to do exceptions.

Exceptions give you stack traces automatically. All of that context (and more) is there without library authors having to manually weave it in at every level of calls.

    > Today, there are newer attempts to learn from as well,
    > including Dart, Midori, Rust, and Swift.
For what it's worth, we are making significant changes to Dart's generics story and type system in general [1]. We added generic methods, which probably should have been there the entire time.

Generics are still always covariant, which has some plusses but also some real minuses. It's not clear if the current behavior is sufficient.

Our ahead-of-time compilation story for generics is still not fully proven either. We don't do any specialization, so we may be sacrificing more performance than we'd like, though we don't have a lot of benchmark numbers yet to measure it. This also interacts with a lot of other language features in deep ways, like nullability, whether primitive types are objects, how lists are implemented, etc.

[1]: https://github.com/dart-lang/dev_compiler/blob/master/STRONG...


Having just spent the last two months writing Go code, exceptions are the thing I miss most (well, besides the ternary operator and map/reduce operations). Not only are errors painful to debug without stack traces, but every single method call is followed by three lines of "if err != nil {". I am amazed that folks tolerate the sheer amount of repetitive typing the language requires.
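To illustrate the pattern being described (parseAndDouble is a made-up helper): every fallible call is immediately followed by an explicit error check.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseAndDouble parses s and doubles it, annotating any error
// with the operation and its argument, as Go convention suggests.
func parseAndDouble(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parseAndDouble(%q): %v", s, err)
	}
	return n * 2, nil
}

func main() {
	v, err := parseAndDouble("21")
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // → 42
}
```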


Changing languages is like changing your country of residence. Some things take a long time to get used to, unless you're a highly mobile traveller :)

I couldn't imagine myself returning to languages with exceptions and ternary operators, for example.


> I couldn't imagine ...

Reminds me of Rob Pike's post: https://commandcenter.blogspot.fr/2011/12/esmereldas-imagina...


Wow. What an unpleasant way to respond to criticism. Being a programmer does not require the ability to hobble your thinking. I can easily imagine programming in a language without generics - but I'd still be programming with generics, I'd just be expanding them in my head (or, if I was writing enough code, in my own implementation of generics) rather than using a built-in implementation.


> I couldn't imagine myself returning to languages with exceptions and ternary operators, for example.

Why? Is it because no language with those features is good (let's say you wouldn't want to return to Java, for instance), or is it because you genuinely prefer this over the ternary operator:

    if expr {
        n = trueVal
    } else {
        n = falseVal
    }


Well, in my 15 years of experience, for each nice example of the ternary operator being used well there are 10 more examples where it's used improperly and makes the code hardly readable.

I know the typical answer is "it's just a bad programmer", but that's where Go takes a different view. Many design decisions in Go were made with awareness of the social context of programming.

If a feature incentivizes using it in the wrong way, it's not a bad programmer, it's a bad feature. A programming language is a language between humans in the first place, and it should be as readable as possible; it should disincentivize obscure, easy-to-misuse constructs.

It's so much better when you go to practically any Go repository and find that the code is readable and clear. I've never had that experience with any other language before.

PS. Found one real example of abusing ternary operators. It's one of the coolest I've seen. And I guess it's easy to see that the same programmer wouldn't write the same thing with "if .. else .." blocks in Go. He would realize it's too verbose and awkward, hence probably the wrong design, and so he needs to find a better solution within the space of language features. http://imgur.com/a/hjKOe


> If a feature incentivizes using it in the wrong way, it's not a bad programmer, it's a bad feature.

So there's nothing good about alcohol (it may cause liver damage), cars (they may cause accidents), prescribed drugs (they may have unpleasant collateral effects), ...

Hey, it's also true that programming incentivizes bugs, so programming is bad. Let's stop programming!


It's weird how people miss the conditional operator but not switch case fallthrough.


> It's so much better when you go to practically any Go repository and find that the code is readable and clear. I've never had that experience with any other language before.

What languages did you have to endure before?

> Found one real example of abusing ternary operators.

Yeah, the original code is silly, the binary search trees are not always balanced. They should be more like:

    (type <= 10) ? ((type <= 5) ? ... : ...) : ((type <= 7) ? ... : ...)


It's worse than that, since you need to declare the variable as well. It's the "var n int" piece that bugs me, since sometimes you're moving code around and have an n := trueVal hidden in there.

    var n int
    if expr {
        n = trueVal
    } else {
        n = falseVal
    }


Go has a ternary if, but the syntax is weird ;-)

    n := map[bool]func() int{
            false: func() int { return falseExpr },
            true:  func() int { return trueExpr },
    }[expr]()

https://play.golang.org/p/TXIse32WD9

Now, let's see how to abstract it with "go generate"...


I genuinely prefer

    n = if (expr) trueVal else falseVal
(which is roughly what e.g. Scala does) over the meaningless-unless-memorized '?' and ':'


In most cases this will boil down to:

    n := defaultVal
    if expr {
      n = otherVal
    }
Which I think is easier to understand, and not that much more typing. YMMV of course.


If defaultVal is a constant, this is OK, but what if it requires computation?


I think typing is the easiest part of programming.

Reading, maintaining and improving legacy codebases are some of the hardest.

If that extra typing makes it much easier to do the above tasks, I don't mind doing this.

Go runtime panics include stack traces, complete with the offending statement's line number in the source code.


> I know the Go folks don't like exceptions, but this is an example of them learning the hard way about one useful thing they lost by deciding not to do exceptions.

The first thing I do in a Go project is reimplement exceptions, it seems. It isn't even so much the stack trace, but that the _cause_ can be chained on. Often one error causes another, and the error interface in Go is too weak to capture that.

Rob Pike has his blog post about errors being values, but it's basically useless because the standard library hardly uses the more advanced error types. Pretty much every library returns error instead of MoreAdvancedError, which means you are doomed to speaking the lowest common denominator.

(for the record, promising in the documentation an error will always be some type is pretty weak).


There was a proposal at some point to integrate github.com/pkg/errors into the stdlib, since it's pretty much a drop-in replacement for the current errors package, with extras. One of them is that errors contain stack traces that can be printed out if needed. Pretty useful, and still not an exception.


For errors with stacks in golang today, you could try the Meep [1] library.

It's a library for More Expressive Error Patterns. You can declare error types like this:

  type ErrFrobnozMalformed struct {
      Frob *Frobnoz
      meep.TraitTraceable
  }
... and any type you compose with a meep trait like that gets superpowers, like automatically attached stacks.

I'm the author. I don't think it's perfect -- in particular you really can't avoid a certain amount of boilerplate :( -- but with meep you get stacks, and you get custom error types, and that's worth a lot to me.

Whether or not you use this code, the idea that might be a useful takeaway is that the stack-capturing behaviors (and others) are a trait you can "mix in". Whether a stack is appropriate depends on the situation. Some errors are fairly regular (e.g. certain kinds of IO halt) and putting a stack on them is not useful (and is CPU-costly). This doesn't necessarily follow any sort of direct inheritance tree. (Java has started doing something similar by adding Even More parameters to the exception constructor, e.g. 'capturestack=false'.) I think this is an important point: errors often should have stacks, but not always.

I'm still hugely looking forward to seeing what the Go authors do in the future to make errors smarter. Doing the right thing should be easy, and it's almost impossible to strap something this essential on with a library: special syntax and compiler support for informative errors is warranted.

---

[1]: https://godoc.org/github.com/polydawn/meep


I think Swift got almost everything right, except for not having a GC maybe. Especially everything related to exceptions: the combination of "throws", "defer" and "try"/"try!" gets rid of most of the classic problems with exceptions.


> Generics are still always covariant, which has some plusses but also some real minuses.

How is this even possible? Does the compiler just fail if you try to put a type parameter in contravariant position? Or does it allow it and then blow up at run time?


Tangent: Is it a good thing for programming that I have to wrap my mind around this new term called contravariant?


I'd say "yes". But generally you don't have to (except in edge cases), library authors do. You benefit from stronger type guarantees.

It's a bit easier to think about it as ">=T" vs "<=T" though.

In one direction, you allow T and any subclass. This is probably the most common - all T subclasses should act like T, e.g. have method x() on them. You can assign a T subclass to a T variable or return it from a T-returning method. When you add something to a List<T>, it could be a T subclass, because it's "at least a T". (covariant)

In the other direction, you allow T and any superclass. My main way to think of this is for a callback, e.g. for declaring a `map` operation. If you're mapping over List<T>, you can't declare your callback as accepting a subclass of T because the list only contains "at least T". But you can accept a superclass (e.g. Object), because any T (or subclass) has that superclass. (contravariant)

---

If you don't have support for contravariance, you either give up flexibility (you can't make a reusable Object mapper) or safety (you can't guarantee the supplied callback is safe to call).


Why do I have to use OO? Why can't I just call the functions I need?


If there's no type hierarchy whatsoever, then you don't get the benefits of polymorphism. That is the case in some languages - for those, you typically get looser call semantics (no checks, or duck typing, or something), or you're forced to use e.g. match statements everywhere to do the same thing in N branches when you have N types in a list.

With functions and a type hierarchy of some kind (or implicit conversions, or whatever) you have the same kind of issues. When you declare the type of your map function (explicitly or implicitly) you're still bounded by the type you're mapping over. If you make a "(float x, returns float) x + 1e10" function, you can't use it to map over a list of doubles, because they can't safely be reduced in precision. The reverse works though, because floats can be promoted to doubles safely. You essentially have "float" as a subclass of "double". Whether it's OO or not has nothing to do with type hierarchies; OO just embraces them with reckless abandon.

Good type inference systems can hide a lot of this from you, allowing you to drop types most of the time and let the compiler specialize it / make sure it's safe to do this particular thing. But they can fail. When they do, how do you ensure safety?


Can I just use macros? Passing the type as a parameter to a macro can let the compiler match types, no?


Learning the term before the concept is putting the cart before the horse. But once you find yourself dealing with the fact that e.g. one can use a Source[Float] as a Source[Number] and not vice versa, whereas one can use a Sink[Number] as a Sink[Float] but not vice versa; one can use a Function[Number, Float] as a Function[Float, Float] or a Function[Number, Number] and either of those as a Function[Float, Number], but not in reverse, then it's useful to have words for these relationships so that we can talk about them.


I'd say "yes" only because this concept will exist whether or not you have a name for it. And it's a concept you will bump into in any language with any sort of generics (even Go [1])

I wrote Scala for the last 2 years and was the resident type system "expert", so I had to explain variance to people pretty often. Luckily, it's a pretty quick, mathy definition:

Some notation first:

    A <: B means "A is a subtype of B"
    A >: B means "A is a supertype of B"
    F[_] refers to a unary type constructor.
Now for the definitions:

    If F[_] is "covariant", it means that if A <: B, then F[A] <: F[B]
    If F[_] is "contravariant", it means that if A >: B, then F[B] <: F[A]
Examples of covariant type constructors are List and Functions in their output. An example of a contravariant type constructor is Functions in their input type.

^ This is all that needs to be said! I can write it on a whiteboard in ~5 minutes. I'd say that's a reasonable thing to be expected to learn.

[1] https://www.reddit.com/r/golang/comments/3gtg3i/passing_slic...
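The linked Go thread is this same issue in miniature: Go slices are invariant, so a []string cannot be passed where []interface{} is expected, and the usual workaround is an explicit element-by-element copy (toInterfaces is a made-up helper):

```go
package main

import "fmt"

func printAll(vals []interface{}) {
	for _, v := range vals {
		fmt.Println(v)
	}
}

// toInterfaces is the boilerplate copy Go requires, since []string
// does not convert to []interface{} implicitly (slices are invariant).
func toInterfaces(names []string) []interface{} {
	vals := make([]interface{}, len(names))
	for i, n := range names {
		vals[i] = n
	}
	return vals
}

func main() {
	names := []string{"a", "b"}
	// printAll(names) // compile error: cannot use names (type []string) as type []interface {}
	printAll(toInterfaces(names))
}
```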


i think the way you defined it, covariant and contravariant are identical; in your contravariant definition, swap the names of "A" and "B" and you get:

  if B >: A, then F[A] <: F[B]
and B >: A means the same as A <: B, so this means:

  if A <: B, then F[A] <: F[B]
which is the same as your covariant definition.

Perhaps you meant:

    If F[_] is "contravariant", it means that if A <: B, then F[B] <: F[A]

?


It's only meaningful if you are using a language that has static types, subtyping, and generics. That covers Java, C#, and Dart, but omits Ruby (no static types), SML (no subtyping), and Go (no generics), for example.

If you are using a language where it matters, it comes in handy, even though you usually aren't aware of it. You probably already have a correct intuition for it without realizing it.

Say you have a class like:

    class Enclosure<T> {
      void cage(T item) { ... }
    }
And assume some sort of class hierarchy like "Mammal is a subclass of Animal". If you have a method like:

   putKittyInCage(Enclosure<Mammal> enclosure, Mammal kitty) {
     enclosure.cage(kitty);
   }
Is this OK?

   putKittyInCage(new Enclosure<Animal>(), kitty);
The method expects an Enclosure<Mammal> and we're giving it an Enclosure<Animal>. Is that allowed? You probably intuitively see that it should be — an Enclosure<Animal> can hold any kind of animal and mammals are all animals. Your intuition is right.

"Contravariance" is the term to describe precisely what that intuition represents.


No, you can use a type parameter in a covariant or contravariant position. What it means is that it treats all generic classes as covariant with respect to their type parameters, even when the type parameter is used in a contravariant position.

So, for example, List<T> is covariant—you can assign a List<int> to List<Object>—even though add() takes a T. This isn't statically safe, so the language inserts runtime checks to ensure you don't break soundness.


> Exceptions give you stack traces automatically. All of that context (and more) is there without library authors having to manually weave it in at every level of calls.

I find exceptions really useful, but also, I think that with exceptions people tend to lose the very useful panic/error dichotomy. I've seen projects without a single "panic" in the code. All those projects gravitated towards dumb error handling mechanisms, aka "log all errors and continue".


Exceptions don't always give you useful stack traces in a concurrent situation, because the current stack may only reflect a goroutine that's processing data on behalf of another. The real execution context may involve many more.


Along with generics, they should probably also reconsider algebraic data types, such as enums with values. This is the best feature Swift brings to the table, hands down, and it seems to me that it's pretty orthogonal to the rest of the language (although it carries a lot of other features with it, such as pattern matching).

They wrote that they considered it redundant with interface programming, but I really don't understand why. Interface is about behavior, not data. An int doesn't "behave" like one, it is one. And something that's either an int or an array of strings doesn't "behave" like anything you'd want to describe with an interface...

As an example, one should see how protobuf "oneof" messages are dealt with in Go: a switch on arbitrary types followed by manual typecasting. That's just gross...


It's redundant because Go already has type-switches and type-assertions. Your comments about ints vs arrays vs behavior miss an important fact: an object of any type can be promoted to interface{} (aka "dynamic") and then "pattern matched" on via `x.(type)`. Sure, it's pretty crummy pattern matching to only be able to dispatch on a single tag, but there are some fundamental problems with traditional algebraic data types and pattern matching:

1) Algebraic data types encourage closed systems. You may view this as a positive: it enables exhaustiveness checks. But I view it as a way to make your program more fragile. Go's type asserts let you convert to interface types, so you can add new interface implementers later and not have to go fix up old type-assertions.

2) First-to-match pattern matching complects order with dispatch. Each match clause has an implicit dependency on _all_ of the clauses before it, since you can match Foo{x, 1} and if that fails match Foo{x, y} where you know now that y != 1. This is sometimes useful, but as your patterns grow larger, it's simpler to just match Foo{x, y} and then branch on y == 1. A series of type-asserts with if statements has a little bit of order dependency on it: interface clauses and named wrapper types are first-to-match, but type-switch on struct clauses are completely order independent because there can only be one concrete representation underlying an interface{}.

3) Relying on positional fields causes two classes of problems: A) it's harder to grow your system later, since every pattern match needs to mention the new field you added, and B) you can't _not_ mention a field by name (or at least give it a placeholder name) at every use as well. This is the same issue as the Foo{x, y} vs Foo{X: x, Y: y} notation. It's considered good practice in Go to use the latter, since it's more future proof, i.e. Foo may grow a Z field and it will be initialized to zero.
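A quick sketch of why the named form is more future proof (Foo here is a hypothetical struct that grew a Z field after v1 shipped):

```go
package main

import "fmt"

// Foo originally had only X and Y; Z was added later.
type Foo struct{ X, Y, Z int }

func main() {
	// f := Foo{1, 2} // positional literal: now a compile error ("too few values")
	f := Foo{X: 1, Y: 2} // named fields: still compiles, Z is zero-initialized
	fmt.Println(f.Z)     // → 0
}
```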


> so you can add new interface implementers later and not have to go fix-up old type-switches.

Or, rephrased: the compiler doesn't help you find the old type-switches that need new branches.

As you say, being closed gives exhaustiveness checks, and this is often a good thing: it's harder to accidentally not handle new cases in existing code (or not handle existing cases, when writing new code), and it makes refactoring much easier. For some code the flexibility of being open isn't the right choice, just like for some code, it is.

> First-to-match pattern matching complects order with dispatch

This seems orthogonal to ADTs: one can retain behaviour like Go's current switch, and only allow switching on the variant, not doing more detailed pattern matching. This retains order independence.

> Relying on positional fields causes two classes of problems

Again, totally irrelevant: one can pattern-match using the names of fields.


> the compiler doesn't help you find the old type-switches that need new branches

Presumably you're adding to a "closed" set of types, so it's quite easy to grep for the type names in that type set. I just don't find this to be that much of a problem in practice. Never mind the fact that wildcard clauses exist and so this problem comes up in ML/Haskell/etc as well, since the compiler will consider those patterns exhaustive.

> only allow switching on the variant

Go already has this. It's called type-switch:

https://golang.org/doc/effective_go.html#type_switch

> one can pattern-match using the names of fields

Of course they can, but traditional algebraic data types impose order onto fields, which is an undesirable feature IMHO. However, field dispatch loses the nice order-independence attribute.


> Presumably you're adding to a "closed" set of types, so it's quite easy to grep for the type names in that type set

Yes, that's an option, but it's not nearly as nice as the compiler automatically telling you what's up, nor does it help downstream users of a library that adds a new possibility. They just have to hope the change is communicated to them.

> Never mind the fact that wildcard clauses exist and so this problem comes up in ML/Haskell/etc as well, since the compiler will consider those patterns exhaustive.

This is an explicit vs. implicit situation: by using a wildcard, the programmer has explicitly chosen to give up some compile-time assistance, whereas Go implicitly makes this choice for the programmer.

> Go already has this. It's called type-switch:

... I don't see how a facetious response like this is at all helpful. The downsides of type-switch are exactly what we're already discussing.

> traditional algebraic data types impose order on to fields, which is an undesirable feature IMHO

Go is its own language, and can choose to adopt/adapt features to suit its idioms. Referring everything back to how "traditional ADTs" work seems silly when there's trivial tweaks that fit Go better.

> However, field dispatch loses the nice order-independence attribute.

I don't know what you mean by this (I can guess, but I don't see how it relates to naming fields in patterns), could you rephrase?


> nor does it help downstream users of a library

If you add a new type to a sum type in a statically typed language, that's a breaking change for all downstream consumers. Again, you may view this as a benefit (clients get compiler errors about new cases to handle!), but I view it as a drawback (clients can't upgrade until they handle all new cases!).

> Go implicitly makes this choice for the programmer

And I believe that Go (and Clojure, which omits both types and pattern matching for reasons including those I'm discussing) makes the right decision for the programmer, as openness to future changes without breakage is better > 90% of the time.

> could you rephrase?

You said "one can pattern-match using the names of fields", I took that to mean matching on something like Foo{x: 5}. If you do that and also have a clause Foo{y: 10}, now you have ambiguity and the order you compare x==5 or y==10 first matters, since you might have an object Foo{x: 5, y: 10} which would match both clauses.

If you just meant that you could do Foo{x: x} with blanks, that's fine, we're in agreement then. Clojure offers {:keys [x]} in destructuring for this purpose, and it's great.


For both of the first points: I don't think anyone's proposing that closed sum types be the only polymorphism in Go (removing the existing open polymorphism would be a wild breaking change and so isn't even a remote possibility), so this would be a feature for when alerting consumers that something has broken is good. When flexibility is what your library needs, use interfaces, when reliability (or performance[0]) is what it needs, use ADTs.

One argument against adding the new feature is it forces people to think about which to use, and they may choose an inappropriate one. (This choice argument is far stronger than vague concerns about the feature possibly resulting in breaking changes.)

In any case, libraries can already make breaking changes, and presumably quite a few want to/do (so it's not like having a feature that can also result in breaking changes is anything new), and, adding a new type to existing open set can easily be a semantic breaking change that stops code from running correctly, even if it doesn't stop code compiling.

> You said "one can pattern-match using the names of fields", I took that to mean matching on something like Foo{x: 5}. If you do that and also have a clause Foo{y: 10}, now you have ambiguity and the order you compare x==5 or y==10 first matters, since you might have an object Foo{x: 5, y: 10} which would match both clauses.

No, that is orthogonal, as I also discussed in my original reply.

I was visualising something like `Foo { x: binding }` in the pattern, which would make the value of the x field available under the name `binding` (and, for convenience, `Foo { x: x }` case could be allowed to be abbreviated to `Foo { x }`). Whether the binding on the right-hand side of a field allows pattern matching or not is its own discussion.

[0]: ADTs aren't forced to allocate, and possibly use a more efficient switching scheme (it's just an integer for an ADT; I don't know the implementation details of Go's type-switch so it might be that fast).


I'm going to dodge the breaking changes discussion, since it's a huge tangent, suffice to say that I greatly dislike _breaking_ changes and want languages that make it possible to avoid them or at least provide a migration path via incremental deprecation prior to removal. Go fails here in many ways. All I meant to do was point out that exhaustiveness checks are not without their tradeoffs.

> I was visualising something like

OK then. We're in agreement on that point. This is what Clojure has in its destructuring syntax, and, as I said, it's great.


> I'm going to dodge the breaking changes discussion, since it's a huge tangent,

How is it a tangent? One of the biggest reasons to want exhaustiveness checking is to avoid the even uglier problem of changes that compile but change semantics.

> I greatly dislike _breaking_ changes and want languages that make it possible to avoid them

It seems you are only talking about disliking breaking changes that cause your program not to compile, while totally disregarding the danger of changes that compile but change semantics.

As respectfully as possible, that seems backwards to me.


> All I meant to do was point out that exhaustiveness checks are not without their tradeoffs.

And all I meant to do was point out that lack of exhaustiveness checks is not without its tradeoffs. :)


Imagine an enum describing the various states of a state machine. I'd definitely prefer the compiler to refuse to compile, rather than hoping to be future proof by ignoring a new state.


> If you add a new type to a sum type in a statically typed language, that's a breaking change for all downstream consumers.

Yes, but that's true regardless of whether the compiler tells you or not. The question is whether adding the value to the set of types that can be returned breaks at compile time or at run time.


> If you add a new type to a sum type in a statically typed language, that's a breaking change for all downstream consumers.

Not true; consumers that have an explicit "catch-all" pattern at the end will keep working normally.


I'm thinking pretty much the opposite regarding fragility and growing your system. The fact that adding a case to the enum has the compiler tell you all the places your program needs to handle it is a tremendous bonus!

The problem is that we're not talking about the same things. You're talking about making new things behave the way an existing interface wants, and I'm talking about a data structure being updated.

Once again, interface is not data.


> adding a case to the enum has the compiler tell you all the places your program needs to handle it is a tremendous bonus

Except this is simply not true if you have any wildcard patterns in any of your matches. Besides, it's easy to grep for one or each of the names of the types in your closed sum. Which is what you're going to do anyway if you have any wild card patterns!

> interface is not data

I don't know what you intend to communicate by saying this.


"this is not true if.." Well, yes indeed, you can ignore features of a language, but i don't see how it prooves any point. As for the grep, i really have no idea what you're talking about ( maybe advanced pattern matching features of advanced language ?). I'm really talking about something very basic, like what swift is doing.

My point about interface vs data is that those are two different things, and I don't think you can solve an issue of type composition with a feature that works only on functions.


> 1) Algebraic data types encourage closed systems. You may view this as a positive: it enables exhaustiveness checks. But I view it as a way to make your program more fragile. Go's type asserts let you convert to interface types, so you can add new interface implementers later and not have to go fix up old type-assertions.

Is the idea here that all types should be open? Should I be able to e.g. add 1.5 as a possible instance for the Int and String types?

Interfaces should be open, but one also needs the ability to represent closed datatypes.


Fragility? Those exhaustive compiler checks are anti-fragile. It's a binary: either your compiler tells you early on where you need to adapt your code, or your automated test suite / customers will tell you in integration / production. Your code is likely to break either way; the question is how early / cheap you would like the fix to be. Code that is cheaper to fix is less fragile. QED


I think they thought of the standard OOP solution to algebraic data types. Rather than pattern match on the value and then do the work, you dispatch on the value through an interface and do the work. I don't know go so here's some pseudo code:

    type Foo = { x : int }
    type Bar = { y : string }

    interface DoStuff { doStuff() -> void }

    function Foo.doStuff() {
        ...do stuff with Foo.x
    }

    function Bar.doStuff() {
        ... do stuff with Bar.y
    }

    function main() {
        value := getSomethingThatImplementsDoStuff()
        value.doStuff()
    }
Compare this to ML:

    type DoStuff = Foo of int | Bar of string

    let main =
        let value = getSomethingThatReturnsDoStuff () in
        match value with
        | Foo x -> (* ...do stuff with x *)
        | Bar y -> (* ...do stuff with y *)
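For reference, the interface-dispatch version above might look something like this in actual Go (names kept from the pseudocode, with doStuff returning a string so the dispatch is observable):

```go
package main

import (
	"fmt"
	"strconv"
)

type DoStuff interface {
	doStuff() string
}

type Foo struct{ x int }
type Bar struct{ y string }

// Each concrete type supplies its own behavior; the caller
// dispatches through the interface without knowing the type.
func (f Foo) doStuff() string { return "int: " + strconv.Itoa(f.x) }
func (b Bar) doStuff() string { return "string: " + b.y }

func main() {
	values := []DoStuff{Foo{x: 1}, Bar{y: "hi"}}
	for _, v := range values {
		fmt.Println(v.doStuff())
	}
}
```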


This is basically the expression problem: https://en.wikipedia.org/wiki/Expression_problem

The expression problem is motivated by the fact that the OO style (which Go favors via interfaces) makes adding new cases easy (just add another subclass) but adding new methods hard (you have to modify every subclass), while the pattern matching style makes adding new methods easy (just add another match statement) but adding new cases hard (you potentially have to modify every match statement in the program). Depending on the task at hand, either one may be more advantageous. The solutions to the expression problem are attempts to get the best of both worlds.

A lot of programming language designs deny that this is a tradeoff, which I think is misguided. Sometimes it's more convenient to be able to add methods easily, and sometimes it's more convenient to be able to add cases easily.


My understanding is that go's interfaces are counter to the OO style. It's about behavior, not objects; plus, by recommending that users write the smallest possible interfaces and then compose other interfaces out of those smaller ones, you reduce the surface area where the expression problem can come into practice!


Go's interfaces are pure subclassing for the purposes of the expression problem.


Substitute implementation for subclass and the point still stands.


Indent 4 spaces for code.

Here's another way to do this in Go:

    func f(obj interface{}) error {
            switch obj := obj.(type) {
            case Foo:
                    // Do stuff with obj.x
            case Bar:
                    // Do stuff with obj.y
            default:
                    return fmt.Errorf("obj is of unexpected type: %T", obj)
            }
            return nil
    }


Unfortunately, this makes a runtime error from something that could be caught at compile time.


It's possible to make closed interfaces in Go (give it an uncapitalized method), so there could be a compile-time warning when you don't handle all possible cases in a type switch on a closed interface.


Maybe this is possible (and I'm not sure it is--this seems like the sort of thing struct embedding would break), but it still doesn't fix that interfaces incur an alloc. Further, it's a shame to abuse interfaces like this; it's not very ergonomic to define a method on a type to denote its membership in a set when we could do something like `type Foo enum { int, Bar, string, []byte }`. None of this is to say implementing real enums in the language is an easy endeavor; only that interfaces are a poor substitute.


It is possible, and struct embedding does not break it because struct embedding only enables composition. There is no inheritance, so you can not embed a struct that has something private in it and then "inherit" it and override the private stuff. There is no such thing as inheritance in Go. You can "decorate" methods (in the GoF patterns sense), but the internal implementation that you are decorating only gets the instance of itself, not the decorating class.

I'm not addressing your other points, just your first sentence, since that is a matter of fact I can concretely address.


> struct embedding only enables composition

Not sure what you mean here. Struct embedding is composition. It enables delegation of methods, which allows the outer struct to satisfy the same interfaces as the inner struct. This necessarily means you can't create closed interfaces. See for yourself: https://play.golang.org/p/akn9rir5of

This holds even if the outer and inner structs are defined in different packages.


This actually sent me down quite the path of discovery, which yielded a couple of surprises I may have to document somewhere else. But the answer to your question is that you can seal an interface by declaring a private interface that has a private method. If you do that, nobody can construct any type that will implement it, even if they peek in the module and look at the private method name. The compiler will yield something like:

    src/b/b.go:16: cannot use b (type B) as type 
       a.private in argument to a.CallPrivate:
            B does not implement a.private (missing a.privatemethod method)
                    have privatemethod()
                    want a.privatemethod()
A public interface with a private method does behave a bit interestingly. You can create new structs that compose in the implementation, and if you have a function (not method, function) in the private package that tries to type match on them, it can indeed end up with a type unknown to your package. However, you still have not necessarily "opened the interface" because it remains impossible to ever call a private method on that interface that was not defined in the original package. I haven't tested what public methods do.

I was able to construct a struct that in some sense had two different methods which could both be called under different circumstances with the same short method name, which is a bit weird.

So, in Go, you can completely close an interface such that you'll never get a type you don't expect, or you can close an interface in a way that means you may get an underlying type you didn't expect but the method will still be guaranteed to come from your package.


I believe that you can seal an interface by making it and its method(s) private, but that doesn't do us much good because we're talking about using this to build something that looks like an enum (complete with compiler warnings when we fail to address a branch in a type switch). If you only intend to use your "enum" in the package, then the interface isn't sealed in the scope you want to use it. If you want to use the enum outside of your package, then you can't use the private interface at all.

So you can create a "sealed interface" only in the sense that you can't use it at all outside of its defining package, and it won't be sealed inside of its defining package. I can't see a way to build an enum from this in the way the OP described.


Thanks for the formatting help!


I see where you are coming from, but that isn't the go way.

Switch on arbitrary types, followed by typecasting? That's the go way. No surprises. Explicit instead of implicit behavior.


If anything, ADTs would make it even more explicit. And pattern matching would make it impossible to miss cases.


If you have a look at swift, it's also very explicit, and non-magical ( which is why i like it so much). The difference with go is that you can't make any type error when unwrapping the enum, because the compiler knows what are the different possibilities.

I see no reason for go not to adopt it, in all honesty. it's nothing like generics, because it doesn't seem to add complexity to the rest of the language ( imho ).


> you can't make any type error when unwrapping the enum

This is also true in Go with type switches:

    switch y := x.(type) {
    case MyStruct:
        // use y as a MyStruct
    }
And with type assertions:

    y, ok := x.(MyInterface)
In both cases, y is of the correct type. The ok is optional; if it's omitted and the assertion fails, you'll get a runtime panic.


y is of the type you asked for, yes, but that isn't necessarily the type that should be there if the programmer makes a mistake, or if the underlying code changes. With an enum, the types are written in one place, and the compiler connects the dots, giving compile-time errors about mistakes/mismatched expectations. This is similar to how one can call `foo(x)` and the return value's type is known, with no need to first cast the function to the type the programmer thinks it should be. The casting pushes type errors to execution-time rather than compile-time.


> The casting pushes type errors to execution-time rather than compile-time

Which is exactly what pattern matching does, which is what I was responding about.


Ok, i'm starting to understand what we're disagreeing on. You're talking about pattern matching with dynamic expressions ( don't know the official name for that), such as what you find in erlang, i'm talking about very basic enum unwrapping based on types, which is compile time checked. Swift also adds some kind of dynamic expression to patterns, which may break exhaustive checking, but in the case of go, just a simple type check would be a great start.

Edit ( since i can't reply) : just look at https://developer.apple.com/library/content/documentation/Sw... with something like the barcode example.

It's all compile time checked.


> very basic enum unwrapping based on types, which is compile time checked

Sooo just accessing fields on a struct? I don't understand the distinction you are trying to make.


It's like accessing fields on a struct, where you can only access the ones that are guaranteed to be valid. For instance, in existing Go, the struct approach would be to have two pointer fields, *X and *Y, where (in theory) exactly one of them is non-nil, but both can still be accessed. This requires manually maintaining that invariant (and remembering to check when accessing), as well as sacrificing performance by allocating. Using an ADT would allow storing the X and Y by value, as well as guaranteeing that the invariant of "X xor Y" is always true.


Sounds like type-assertions cover that use case just fine. However, you are correct that you can't do this without allocating. ¯\_(ツ)_/¯


As people keep saying, type assertions do not have as strong compile time assurances as ADTs.


And as I keep saying: I don't value those assurances and in fact find them to be a net loss when accounting for breaking changes to clients.


You pay the price of getting the bugs and edge cases out regardless of whether you have compile time assurances.

The complexity is inherent and you can either choose to make it implicit and play the losing game of ensuring it doesn't fail with tests or make it explicit from the outset.


You personally not valuing the differences is very different to not having any differences, and the latter is what a lot of your comments seem to be implying, including the one I was replying to.


I'm sorry, i meant trying to unwrap to a type that's not even possible (and forgetting to check for one that is). Sure, there are ways to dynamically ensure typecasts have succeeded.


Go will give you errors for "impossible type assertions" in many cases.


> The ok is optional, if omitted and the assertion fails, you'll get a runtime panic.

Great, now my enums can panic because I have to play "be the typechecker", instead of focusing on something actually important like business logic.

How is this not seen as universally a bad thing?


I don't understand the downvotes since that's about the official stance of the go team -- see the FAQ (cited somewhere else). Agree with it or not.


Not recommending either way, but if the language supported algebraic types with pattern matching, it would still be explicit.


Maybe you're visualising something different, but I don't see how ADT enums are implicit. Could you explain?


Might be relevant: https://golang.org/doc/faq#variant_types

Sometimes one wishes people from the go team had spent some time hacking on ML-derived languages.


I've always wanted to like Go, but every time I get ~1,500 lines in a project, I remember my pain points. I totally see why other people like the current version of Go, but as it stands, it's not an ideal match for my brain.

Dependency management is a big pain point for me. I'm really glad to see several of my pain points on the list for this year, including another look at generics.

Generics are genuinely tricky: They allow you to write many kinds of useful functions in a type-safe manner, but every known approach for implementing them adds complexity to the language. C#, Java and Rust all bit the bullet and accepted (some of) this complexity. Maybe Go will find a sweet spot, preserving its simplicity but adding a bit of expressiveness?

Anyway, it pleases me to see that the Go team is thinking hard about this stuff. At the bare minimum, I'm going to be contributing code to other people's open source Go projects for the foreseeable future. :-)


> C#, Java and Rust all bit the bullet and accepted (some of) this complexity. Maybe Go will find a sweet spot, preserving its simplicity but adding a bit of expressiveness?

That's exactly my hope. When i switched from Rust back to Go, i had a sigh of relief, i was able to prototype quickly and easily and my cognitive load felt much lower.

Strangely enough, this felt very similar to switching from NodeJS to Golang. In Node, i was constantly worried about what is async or sync and the dynamic nature of it made my code feel like the wild wild west. Both Rust and NodeJS put a lot of mental burden on me, in different ways - Go was definitely a sweet spot in both correctness and ease of use, and i hope they achieve that with Generics as well.

NOTE: My Rust programs felt vastly more secure than in go, and i miss that - that part was less cognitive load in favor of Rust. The struggle was mainly at the design phase, and i just wanted to mock up some code and types & borrowing posed many refactoring issues. I hope in the future strong Rust tooling will make refactoring a breeze.


> Generics are genuinely tricky: They allow you to write many kinds of useful functions in a type-safe manner, but every known approach for implementing them adds complexity to the language. C#, Java and Rust all bit the bullet and accepted (some of) this complexity. Maybe Go will find a sweet spot, preserving its simplicity but adding a bit of expressiveness?

I seriously doubt Go and the people who maintain it are going to do groundbreaking work in this area. It's an extremely developed area of language design (and still developing, way ahead of where Go would ever go).

What "complexity" does simple parametric polymorphism (i.e. forall a) bring? The only thing I can think of is some extra syntax, which is far less complex than a codebase built upon a lack of parametricity. Hell, don't even allow user-defined parametric types and force everyone to stay with Go's parametric builtins but allow programmers to abstract over them. Seems like a no-brainer to me.


I think it is no surprise that one goal of Go is programming in the large, where big teams are involved. For single-person projects like the ones I think you are doing, many people want an intellectually stimulating language, where Go may fall short.


> I think it is no surprise that one goal of Go is coding at large where large teams are involved.

I don't buy that. There is no proof Go programming at large scales better than Java programming at large.

Go didn't reach the scale of Java programs yet. And no, Kubernetes or Docker, while fairly large, are nothing compared to 15 y.o. multi-million-line Java codebases. Go certainly needs less bureaucracy due to the ease of deployment, but it doesn't mean Go projects scale better in large teams.

> For single person projects that I think you are doing, many people want intellectually stimulating language where Go may fall short.

I really hate this kind of arguments. Features like generics aren't intellectually stimulating, they are here because people want to write type safe code. Context.Value(interface{})interface{} isn't type safe code.

Now tell me, what is more intellectually challenging: writing concurrent programs free of race conditions, or generics?


> Go didn't reach the scale of Java programs yet. And no, Kub or Docker, while fairly large, are nothing compared to 15 y.o. multi-million line Java codebases.

I didn't claim Java projects do not scale. They do. I use Java all the time at my day job. I would use whatever makes sense. Maybe someday Rust is absolutely essential I would use it then.

> Features like generics aren't intellectually stimulating, they are here because people want to write type safe code.

When people need generics they can use languages with generics facility then. I am not arguing otherwise.


> When people need generics they can use languages with generics facility then. I am not arguing otherwise.

This is the point where Go's advocates disagree with its detractors. The latter tend to feel that you need generics (just like you need to program in something higher level than assembly) far more frequently than they are told.


+1

I want intellectually appealing code, not intellectually torturous.


Posts like this really return me the confidence in the future of Go the language.

I very much wish Go to succeed, it's built on a few nice ideas, but where it currently is it has a number of usability impairments that stop me from wanting to work with it.

But I see that these impairments are seen as problems by key developers, and work is underway to eventually fix these problems. (And this is besides the "routine", incremental but very important improvements, such as GC or stdlib.)


> But I see that these impairments are seen as problems by key developers, and work is underway to eventually fix these problems.

What will inevitably happen is that Pike et al will argue that such things are merely problems because "you're doing it wrong" or "there's no way to do this without any tradeoffs of any kind" (generics), and ultimately very little will change.


Russ Cox, the guy who wrote this post, is the technical lead for the Go project. He authored more of Go's code base than anyone else (by a huge margin). His opinion about these issues holds more weight than pretty much anyone. I wouldn't downplay it.


The other alternative is that they allow them, and Go slowly loses its magic and becomes another Algol, C++, or Java.


God forbid it become as useful and ubiquitous as C++ or Java.


> Not enough Go code adds context like os.Remove does. Too much code does only

Well, the error interface is { Error() string } and gophers were told to use errors as values, not errors as types, because supposedly "exceptions are bad". By providing context you are just re-inventing your own mediocre exception system. Why use errors as values in the first place if you need context? Just put exceptions in Go, so people don't need a third-party library to wrap errors in order to trace the execution context.


I really don't miss having invisible control flow for expected conditions blowing up my programs with long stack traces.

There's a whole lot of space between "include useful context in errors" and "exceptions".

(And FWIW, Go does have exceptions, it just calls them panics, and has a culture of not using them for "known knowns" error conditions.)


> I really don't miss having invisible control flow for expected conditions blowing up my programs with long stack traces.

As opposed to panics? Checked exceptions don't blow up in your face, you have to handle them. Nil errors and type errors might, but those happen in Go too. I see no difference with Java here. Go isn't better when it comes to error handling; in fact Go is extremely tedious when it comes to error handling.

> (And FWIW, Go does have exceptions, it just calls them panics, and has a culture not using them for "known knowns" error conditions.)

So Go has both (unchecked exceptions and errors as "values"); how does that make things better? It doesn't. If it did, the blog wouldn't be talking about people "handling errors the wrong way".


I've found it helpful to distinguish between errors that are program bugs and errors that are conditions of the outside world (network errors, invalid data, etc).

I think it depends somewhat on the kind of software you're writing.

Being very careful with errors is quite handy for a long-running server process; maybe (maybe!) not so worth it for a program that starts and stops within the attention span of a single user.


> errors that are program bugs and errors that are conditions of the outside world

That also happens to be the intended distinction between Java's checked and unchecked exceptions.


> you have to handle them

the definition of "handle" widely varies, to the point of making the exercise near meaningless.


> I really don't miss having invisible control flow for expected conditions blowing up my programs with long stack traces.

You mean outside of accidentally running into a nil, and having your program spontaneously abort?

Even worse, reliably testing for nil (`if (foo == nil)`) doesn't even save you because go's nils are typed. So `foo == nil` can actually return false even when foo is nil. Utter madness.


Running into a nil would be an "unexpected condition". In that case, an exception (panic, in Go lingo) is reasonable.

It is true that Go doesn't do as much as other languages to avoid nils. It's a small, unambitious language in some ways. (Like, it adds more typing than Python, not as much as rust or swift.)

    foo == nil
will always tell you the truth. The edge case that trips people up at first (and that I think you're thinking of) is that storing a nil pointer into an interface will not give you a nil interface:

    package main

    import "fmt"

    type Foo struct{}

    func (f *Foo) Do() { fmt.Println("do the foo", f) }

    type Doer interface {
        Do()
    }

    func main() {
        var foo *Foo = nil
        fmt.Println(foo == nil) // true
        var doer Doer = foo
        fmt.Println(doer == nil) // false, because doer is (*Foo, nil)
        doer.Do()                // prints "do the foo <nil>"
    }
https://play.golang.org/p/Ul9cX34qfF

That's because an interface is a (pointer,type) pair, so even if the pointer is nil, the type being non-nil will make the pair non-nil.


I understand how and why this works. But the fact that you can write some form of

    var foo *Foo = nil
    var bar Bar  = foo
    
    foo == nil // true
    bar == nil // false
is, to be completely honest, completely ridiculous. It means you can perform a sanity check for nil values and still accidentally operate on a nil value.

Null references have been referred to by Tony Hoare as his billion-dollar mistake. We should not be reintroducing mistakes of this level of magnitude, and worse, compounding on them, in new languages invented with fifty years of hindsight.


It's because nil values aren't invalid the way they are in C. You can have a nil Object pointer but still call methods on it:

    func (o *Object) DoSomething() {
        if o == nil {
            fmt.Println("Nil implementation")
            return
        }
        fmt.Println("Something with the members here")
    }
Whether or not this is a good idea would be a different discussion, but typed nil struct pointers don't automatically crash the way they do in C. Therefore, it is perfectly valid to have some interface implemented by what is a nil pointer to some object type. I've even used this a few times. nil only crashes when you try to write into a nil map and a few other operations. Even some operations you'd expect to crash are implemented in a way that they will not. For instance, you can append to a nil slice, and a new slice and underlying array will be allocated for you rather than crashing.

Go does not by any means solve the "billion-dollar mistake", and I'd still like to be able to put "non-nillable" on things, but it is less affected by it than C. (Which is damning with faint praise, certainly.)


You don't have to keep explaining the mechanism. I get the mechanism. But the fact that `foo == nil` can return false when foo is actually nil is indefensible. The fact that it crashes less than C does is not a defense here. Sorry.


But foo isn't nil.

I'll agree with the opinion that it can be confusing that an interface consists of two elements, the type and the value itself, and that the "== nil" check may not do what you initially expect, because the interface values are hidden below the surface of the abstraction the language provides. But it is not true to say that "the interface value is nil", because it factually isn't. To use pseudo-Go, since neither "Interface" nor the types are first-class values, Interface{ConcretePointerType, nil} does not and should not compare as equal to nil. There are values in memory corresponding to this interface value, and those values do not correspond to any interpretation of "nil" Go uses. We're talking about real numbers in real RAM here and a real specification of nil that corresponds to those real numbers in RAM; this is not a matter of opinion.

I don't know of any language that doesn't have a few dozen quirks of this sort buried in it. It can be pointed out as a legitimate criticism of the language, but it's not particularly a flaw relative to other languages. Either it's not possible to build a truly clean programming language that lacks this sort of quirk, or we're not very good at it.

Before replying to me with the language you believe lacks these quirks, please do me a favor and search for "$LANGUAGE quirk" and "$LANGUAGE gotcha" first, because that's the first thing I'm going to do. And it won't be a defense to me to explain how what you found is not really a quirk because you just have to properly understand the language, because that defense is true for this quirk of Go's too.


i think people get so tripped up by this because they expect equational reasoning to work:

   a = nil
   b = a
   // => b == nil, right? but no.
one of the devs on /r/golang posted he wished they had used a different keyword like "unset" to check for an empty interface, which would be much less intuitively confusing.

i agree all languages end up with quirks like this (can't get everything right without really using the language, and by then it's too late!), and in fact go is low on the quirk level in my opinion.

the other one i really wish i could change is that

    for i, x := range foo
doesn't give x a new binding each time through the loop. Confusing, and almost never the behavior you want.


> But the fact that `foo == nil` can return false when foo is actually nil is indefensible.
You keep saying this, but it's not true.


Only because go has redefined the terms.

    …

    foo = nil
    bar = foo

    // this check may or may not evaluate as true
    if bar == nil {
       …
    }
Saying that `bar` here isn't actually equal to nil if it's an interface is tautological. It's not, but only because the language has defined this to be the case. Go could likewise define `1 == 2` to evaluate to true, and you could reuse the same semantic reasoning to defend it.

The point being argued is that it doesn't matter that go defines this to be the case, the point being argued is that it's surprising, frustrating, and can lead to bugs. Particularly when `bar` starts off as a pointer to a struct, but later is refactored to be an interface type. Code that used to work still compiles, but now encounters a runtime nil panic.


(See also my response to jerf, above.)

In Go (as in most programming languages) it's not true that the assignment "a = b" implies that "b == foo => a == foo", if a and b are different types.

For example, this C code will print "nope":

    float b = 3.5;
    int a = b;
    printf(a == 3.5 ? "yup\n" : "nope\n");
So back to Go, sure, this is initially surprising, and most people get bit by it at first. Once you know about it, it's fine.

In practice I haven't encountered any refactoring bugs as you describe, though it is theoretically possible.

I personally don't see it as a big issue.

(In retrospect, Go could've helped guide intuition better by using a new keyword like "unset" or something to test for the zero-value of interfaces, instead of overloading nil.)


> In Go (as in most programming languages) it's not true that the assignment "a = b" implies that "b == foo => a == foo", if a and b are different types.

While I haven't counted and compared, I suspect that in most programming languages, "a=b" being a non-error implies that either a and b are the same type and value, or that a and b are values that, if they are of different and comparable types, will compare equal.

There are certainly popular languages that do it the way you describe, but I don't think most languages do.


That's a fair point.


> In Go (as in most programming languages) it's not true that the assignment "a = b" implies that "b == foo => a == foo", if a and b are different types.

While this is true, in all of those other cases you have the benefit of compile-time type checking so cannot call a function on the wrong type.

With nil, you get runtime errors.


Go's nils aren't typed; it's just that an interface value can have a type and point at nil. Also, I've never run into this "problem" in all my time of using Go...


I think their views on this are changing from experience with the language. Part of the problem, I think, is that the way they wanted people to use the language isn't clearly explained in their posts about errors. Somehow I read them over and over and never quite get the gist of when they think I should create a custom error type or not.


Because Go considers errors to be values, and exceptions are control flow.


Errors must be handled, which means you always get control flow with them, even if it's just the primitive (if err != nil) -- in Go you just have to manually write the control flow instead of the language helping you.


Right - the control flow is explicit, and works the same as regular control flow.


But there is value in providing a different control flow for errors, which is why exceptions have become prevalent in most programming languages in the past decades.

The value is that your code's happy path is clean and not encumbered with error checks every ten lines, like we see in Go sources all the time.

Separating the happy path and the error handling in different sections of the code contributes to clean code, especially if exceptions are checked and cannot be ignored by the developer.


In my experience, separating the happy path from the error handling leads to unhandled errors. Just yesterday, one of our python projects (an API with about 600 endpoints) started barfing an error for non-parseable JSON because something at the top level caught it. I have no clue where it came from nor how to reproduce it. Had we been handling errors when and where they occur, I would have a vastly shorter debugging period in front of me.


> In my experience, separating the happy path from the error handling leads to unhandled errors.

No, the separation has nothing to do whether errors are handled or not.

If error handling is mandated by the compiler (e.g. checked exceptions), there will be no unhandled errors since the compiler will simply refuse to compile your code until you handle these errors.

Whether you handle these errors near the happy path or in a different section of the code is an orthogonal concern.


Or people will simply elect to use unchecked exceptions, and users will see stack traces frequently. In either case, Go programs don't seem to suffer from bugs in the error handling paths like Java and friends. YMMV.

I don't see the value in a clean happy path and a hidden error path.


> Or people will simply elect to use unchecked exceptions

Go already has unchecked exceptions. It's called "panic".


Ok? How does that relate to the discussion? What point are you intending to make?


It completely invalidates your argument. Your argument is that an error-handling system as described would be undesirable because it would result in developers misusing unchecked exceptions. Go already has unchecked exceptions in the form of `panic`.

So either developers are already abusing it, in which case go's current error-handling approach is no better than the one proposed according to your metric. Or developers have access to it but aren't abusing it, in which case there's no reason to expect they'd abuse it in a system where doing it the right way is even easier than it is today.

In my experience, users already do experience stack traces as it's embarrassingly easy to accidentally operate on a nil pointer.


I'm not sure whose argument that is, but it's not mine.


You realize people can simply scroll up to verify that, in fact, that is the exact argument you're making, right?


You're confused. My only point was that checked exceptions are only good when they're used. Better luck next time.


> Not enough Go code adds context like os.Remove does. Too much code does only

   if err != nil {
       return err
   }
Is anyone else surprised that forcing programmers to do the tedious, repetitive, and boring work of being a manual exception handler overwhelmingly results in people doing the least amount of effort to make it work?

I feel like so many of the headaches of go could have been avoided had the developers spent any time whatsoever thinking about the programmers using it.


I think that the Go team worked very hard to make something that programmers actually like using. Yes, it is annoying to do a lot of `check if err is nil`, but at the same time, exception handling is something that can be esoteric, whilst it's trivial to see what your example does.

I also feel like there has been a lot of emphasis put on keeping the APIs consistent; the lack of consistency is what a lot of developers will tell you makes PHP a nightmare sometimes.


> Yes, it is annoying to do a lot of `check if err is nil`, but at the same time, exception handling is something that can be esoteric

People keep saying this, but the alternative doesn't have to be exceptions.

Rust strikes a great balance here. There's no nil; you must handle the Err case of a Result enum or the None case of an Option enum. (This is much nicer than Go: after `val, err := failingThing()`, it's entirely possible to accidentally use `val` as a meaningful value, which is strictly impossible in Rust.) You can also use functions like `map` and `and_then` on these types, which gives you the benefits of explicit error handling without the absurd loss of readability caused by dozens of identical error-handling clauses.

> I also feel like there has been a lot of emphasis put on keeping the APIs consistent

This isn't something special about go. This is a minimum requirement for a language nowadays. Rust, Ruby, Python, Swift, and others all do this.


Regarding error context: I'd advocate simple error-chaining using a linked list. If a function fails, it returns an error wrapping the underlying error as the cause, and so on up the stack. The top of the stack can inspect or print the error chain ("A failed because B failed because ..."), or pinpoint the error that was the root cause.

I would love for Go to include something like this:

    type Error struct {
        Description string
        Cause       error
    }
    
    func NewError(cause error, descriptionFmt string, args ...interface{}) error {
        return Error{
            Description: fmt.Sprintf(descriptionFmt, args...),
            Cause:       cause,
        }
    }
    
    func (me Error) Error() string {
        if me.Cause == nil {
            return me.Description
        }
        return fmt.Sprintf("%v: %v", me.Description, me.Cause.Error())
    }
    
    func RootCause(err error) error {
        if err, ok := err.(Error); ok && err.Cause != nil {
            return RootCause(err.Cause)
        }
        return err
    }
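To illustrate how this would read at each level of a call stack, here's a self-contained sketch that repeats the definitions above (the `readConfig`/`startServer` functions are invented):

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal copy of the proposal above, so the sketch is self-contained.
type Error struct {
	Description string
	Cause       error
}

func NewError(cause error, format string, args ...interface{}) error {
	return Error{Description: fmt.Sprintf(format, args...), Cause: cause}
}

func (e Error) Error() string {
	if e.Cause == nil {
		return e.Description
	}
	return fmt.Sprintf("%v: %v", e.Description, e.Cause.Error())
}

func RootCause(err error) error {
	if e, ok := err.(Error); ok && e.Cause != nil {
		return RootCause(e.Cause)
	}
	return err
}

// Each level wraps the error below it with its own context.
func readConfig() error { return errors.New("file not found") }

func startServer() error {
	if err := readConfig(); err != nil {
		return NewError(err, "starting server on port %d", 8080)
	}
	return nil
}

func main() {
	err := startServer()
	fmt.Println(err)            // starting server on port 8080: file not found
	fmt.Println(RootCause(err)) // file not found
}
```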



Great library.


or gopkg.in/errgo.v1 which is a bit more opinionated about error causes.


Many Go error libraries exist mostly to provide this kind of functionality.


> In the long-term, if we could statically eliminate the possibility of races, that would eliminate the need for most of the memory model. That may well be an impossible dream, but again I’d like to understand the solution space better.

Unless I'm mistaken, this is an impossible dream as long as shared memory exists. It's the core tradeoff that distinguishes the Erlang runtime from the Go runtime (there are others, but they all stem from this).

Your goals are either memory isolation for better distribution/concurrency/clustering/fault tolerance/garbage collection or shared memory for ability to work with large datasets more efficiently.

It's one of those details that changing it would essentially create a new language. You'd have code, packages and libraries that either worked that way or they wouldn't.

IMO, this is an area where Go gets into dangerous territory of trying to be all things to people. Be great at what you're good at which is the "good enough, fast enough, portable enough, concurrent enough, stable enough" solution for backend services in most standard web architecture.

If people need distributed, fault tolerant, isolated, race proof, immutable run times that aren't quite as top end fast and aren't ideal for giant in RAM data structures...there's already a well established solution there by the name of Erlang (and Elixir). They made the tradeoffs already so you don't have to reinvent them.


> Your goals are either memory isolation for better distribution/concurrency/clustering/fault tolerance/garbage collection or shared memory for ability to work with large datasets more efficiently.

The isolation is not physical, but logical. Implementations are free to use zero-copy techniques and make everything just as efficient. Theoretically, the compiler could even optimize the message-passing overhead away on shared-memory systems in some cases. The opposite is also true: shared memory is also logical, and there is room for a lot of clever things, like eliminating races.


How can you eliminate races with shared memory in a concurrent environment?


Look into Pony's reference capabilities. To put it shortly, the compiler will guarantee that there are no data races through this system, and therefore the runtime is able to use shared memory and pointers for efficiency.


Rust's "sharing xor mutability" guarantee eliminates data races by (usually statically but also dynamically) ensuring that a memory location won't be mutated when multiple places/threads can access it. Of course, the machinery required for this in Rust is relatively complicated (e.g. as a side-effect, it allows one to also forgo a GC without losing safety).


How is that different from a mutex lock?


Rust lets you be precise about placing the locks exactly where they are needed, with compile time errors if a lock isn't locked when accessing data, or a lock is missing. Fixing data races with locks for arbitrary code, without more static guarantees in the language, essentially means locks have to be held for most memory accesses: it's hard/impossible to tell if that location will be accessed concurrently. (This is why CPython has a GIL: it's easier to only let one piece of code execute at a time than to handle data races in arbitrary code.)

https://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.h... is interesting reading.


For example, in Rust, it will stop you from accessing the value protected by a Mutex after the lock is freed, and do so at compile time, even through function calls, pointers hidden in data structures, etc.

Another example: I can take a value and mutate it locally, then pass it immutably to a number of threads which use it to compute some results, and once they are all complete, I can mutate the data structure again. If I try to write mutation code before all the threads are re-joined, the compiler will tell me that I can't mutate while there are outstanding immutable borrows. At compile time, at no runtime overhead or locking.


It's being done at compile time.


Excellent read. Official package management looks more like a matter of 'when' than 'if' now. As someone in the Java world who never graduated from Ant to Maven/Gradle, I hope it will be minimalistic and immediately useful to Go users.


After three years of rogue Java development, Maven was an epiphany for me. At the beginning it has a very, very shallow learning curve.


> all the way up the call stack, discarding useful context that should be reported (like remove /tmp/nonexist: above).

It's simple. With exceptions, we got used to "errors" that are, by default, debuggable. But Go got rid of debuggable-by-default errors, and programmers are lazy.


The problem isn't the methodology. It's the library support. I wrote a library for my employer a few years ago (~Go 1.1) with "errs.New" and "errs.Append(err, ...)" (with fmt-style formatting) that generates errors that look like:

  main.go:13 main.main(): highest level error;
    Details: foo.go:8 foo.ExportedMethod(): mid level error;
    Details: foo.go:42 foo.innerMethod(): low level error!
It provides handy helpers like GetRootErr and PanicToError too. I hope we can open-source it in the next month or two.


It's not that it can't be done, it's that it's not in the stdlib. Without being in the stdlib, each codebase solves the same problem in its own way, harming consistency. It's gotten so bad that some libs include stack traces and some even parse panics. For example, how do I write a function that loops over the error chain if it may have come from several libs with different impls?

The problem is they shot themselves in the foot and may have to incorporate a form of "default" method to get around it, like Java ended up having to. You can't have a "ContextualError" type because what does "Cause()" return? If it returns the built-in "error" interface then you require users to use a type assertion. If it returns "ContextualError" then it can't chain existing errors without wrapping. They also can't add anything to the "error" interface itself because they would invalidate all existing impls of that interface.


Python suffers immensely from this - libraries tend to reraise exceptions with a new, descriptive type, but usually don't append the stack, and essentially can't chain any custom data you may have added. Not having the complete causal chain can seriously hamper debugging, and unless it's built in it's unlikely to be adopted consistently.

Python 2 has nothing, and libraries haven't agreed on anything. Python 3 has "raise ... from ...", which largely resolves it, but I'm not sure what adoption is like. Anyone know?


There's already a good library like that: https://github.com/pkg/errors

It is API-compatible with the errors package, and you get errors.Wrap(err, message), errors.Wrapf(err, message, args) and errors.Cause(err).


We wrote ours around the time Go 1.1 was released, so it's considerably older and more featureful than that package.


Yup, my solution as well, along with a couple of other common libraries (e.g. https://github.com/pkg/errors).

The only downside to this is that you can no longer treat errors as values. E.g., you can't do:

  err := Func()
  if err == ErrBadThing {
    // do stuff
  }
So you have to implement some type of `Cause()` method. In my lib, I have `errors.Cause()`, which returns the root error, as well as a shorthand func `errors.Equals(err, ErrBadThing)`.


I really enjoyed reading this thoughtful post. He addresses a lot of the pain points I've encountered when writing Go.

I know there probably won't be immediate fixes, but it gives me confidence in Go's future.


"I don’t believe the Go team has ever said “Go does not need generics.” "

I think that's true, but I do think it's been said by a number of Go users and advocates, which is where the perception comes from.


Meanwhile I've said, "even if they don't introduce user-defined generics, it would be nice if they fixed the language-defined ones."

The builtin generics are a mess. At this point I don't trust the go team to implement any more complicated generic system.


> The builtin generics are a mess.

Could you elaborate on this? I must not have used arrays, slices, maps, or channels enough to find that there's some common issue with all of them, other than that the user can't write her own functions generic over them.


The most obvious problem is that variance doesn't work.

You cannot pass a []string to something that expects a []interface{} for instance, even though you can put every item in the first in an instance of the second.
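To make that concrete, here's a sketch of the element-by-element copy Go forces you to write (the direct call is left commented out because it doesn't compile):

```go
package main

import "fmt"

// printAll accepts the "generic" slice type []interface{}.
func printAll(vals []interface{}) {
	for _, v := range vals {
		fmt.Println(v)
	}
}

func main() {
	names := []string{"a", "b"}
	// printAll(names) // compile error: cannot use names (type []string)

	// The only option is to copy element by element, since []string
	// and []interface{} have different memory layouts.
	boxed := make([]interface{}, len(names))
	for i, s := range names {
		boxed[i] = s
	}
	printAll(boxed)
}
```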


The go vet integration with go test looks interesting. I'm currently using github.com/surullabs/lint [1] to run vet and a few other lint tools as part of go test. It provides a nice increase in productivity for my dev cycle. Having vet + other lint tools integrated into my dev+test cycle has caught a number of bugs before they hit CI.

[1] https://github.com/surullabs/lint

Disclaimer: I'm the author of the above library.


I would really like to see best practices in the documentation on how to include the right amount of error context, as mentioned in the article.

Also what to put and not put in context objects is really important to document as it could easily snowball into a catch-all construct and be totally misused after a while.


> Test results should be cached too: if none of the inputs to a test have changed, then usually there is no need to rerun the test. This will make it very cheap to run “all tests” when little or nothing has changed.

I'll be curious to see how this pans out, because it sounds like a very deep rabbit hole. Is there any precedent for this in other language toolchains? I've seen some mondo test suites in Java that could desperately use it.


Gradle does it, although only at the task level. In fact, Gradle does this for all tasks, not just running tests, so re-running a command is always as cheap as possible.

Well, with some caveats. Firstly, tasks have to be written to support this mechanism, and although the built-in tasks are, not all third-party ones are, and those will always be re-run. Secondly, if a test task fails, it will be re-run. That makes it easy to re-run tests which failed for extraneous reasons.

In your mondo case, to take advantage of this, you'd want to break your test suite up into multiple tasks. You already get a task per subproject, but you could easily define multiple tasks per subproject. I've often had separate tasks for unit tests, integration tests, and browser tests.

Make also does this if you describe tests as a rule to create a test report.


https://bazel.build/ supports this. Bazel is Google’s build system, open-sourced.


From reading https://bazel.build/versions/master/docs/test-encyclopedia.h... , it looks like Bazel's rules are quite strict (not to mention Unix-specific), and requires writing a separate rules file to configure test behavior (e.g. an "external" tag to denote tests that cannot be cached). I anticipate that it would be difficult to retrofit such things onto Go's existing ecosystem in-place (or any language's, for that matter), especially without sacrificing Go's minimalist philosophy (i.e. "this function is a test if its identifier begins with `Test`, end of story"), especially considering Go's documented aversion to config files WRT package management.

As an optional external tool it may work quite nicely, however.


Bazel's rules are extremely strict. For what it's worth, when I was at Google all the Go code was built with Bazel, so I imagine most Google engineers don't experience this pain internally.


I'm not sure that operates at the same granularity though.


Well, for one thing, the Go compiler itself already only recompiles things that have changed, at least as long as you are doing `go build -i` and such. Only testing what you had to recompile seems like a pretty easy thing to do.


Happy to see that being able to not use GOPATH is at last being considered seriously! For years, the Go people wanted to force everyone to work their way. We can still see this state of mind in the associated bug report: https://github.com/golang/go/issues/17271.


> it would be nice to retroactively define that string is a named type (or type alias) for immutable []byte

Perhaps an array is better than a slice, so `immutable [...]byte`. Also, the for-range loop would have to behave differently, so I guess it's a version-2 change. And if semantics are changing anyway, I'd prefer a `mutable` keyword to an `immutable` one.


> Perhaps an array is better than a slice, so `immutable [...]byte`

Slice is the right choice, I think. Were they to choose array as the alias, the language would need to "bubble up" the "array length" type parameter to the string type, which would make strings of different byte lengths incompatible types. But hey, Go is already pretty clearly inspired by Pascal, so maybe we'll see that after all :-)


Re dependencies:

Glide takes our team 90% of the way, but is a bit glitchy (need to wipe ~/.glide sometimes) & lacks a command for `npm link` type functionality.


Great!


It is nice that Go is trying to learn from Pony and Midori. I wonder whether any Gophers have started to learn about object-capability theory and the reasoning behind why so many values are immutable in Pony, Midori, Monte, and other capability-safe languages.

To expand on this a bit, in ocap theory there is a concept of "vat", an isolated memory space for objects which has its own concurrent actions isolated from all other vats. In a vat model, data races are nearly nonexistent; in order to race, one would have to choose a shared variable in a single vat, and then deliberately race on it. But this is not common because ocap theory enforces heavy object modularity, and so shared variables are uncommon.

Additionally, a "deliberate race" is quite tricky. Vats prepare work with a FIFO queue. In the Monte language:

  # A flag. We must start it as `false` because Monte doesn't allow uninitialized names.
  var flag :Bool := false
  def setFlag(value :Bool) :Void:
    flag := value
  # Set the flag to `true` immediately.
  setFlag(true)
  # Set the flag to `false` on the next turn.
  def first := setFlag<-(false)
  # Set the flag to `true` on the next turn.
  def second := setFlag<-(true)
  # And finally, when those two actions are done, use the flag to make a choice.
  when (first, second) ->
    if (flag) { "the flag was true" } else { "the flag was false" }
You might think that this is non-deterministic, but in fact the delayed actions will each execute in order, and so the flag will be set first to `false` and then to `true`.


[flagged]


Well, it's been pretty productive for me. It's also been productive for Docker, InfluxDB, and Kubernetes (among other prominent, complex, open source projects).


Yeah it has also been productive for me. And we cannot forget Hashicorp's use of it for a lot of stuff.


Your comment will hardly lead to productive conversation.

The fact is a lot of people are perfectly fine with current Go capabilities and prefer writing useful software solutions to debating PL theories and community agendas.


> The fact is a lot of people are perfectly fine with current Go capabilities and prefer writing useful software solutions to debating PL theories and community agendas

How many? How many didn't use Go because it lacked a sane type system? Go relies far too much on runtime behavior (i.e. type switches and reflection) to be called a modern statically typed language. In all my time using Java I never once had to cast something, use reflection, or do a type assertion; all of these are common practice in Go, especially now that the std lib is getting APIs like Context.Value(interface{}) interface{}.

This isn't PL theory; these are flaws that aren't going away, and they will only become more apparent once users have to maintain all that "productive Go code" five or ten years after it was written. With Go, the developers basically do the compiler's job manually. People will write about how they ditched Go for these reasons.


You may not have used reflection in Java, but we have a very large application in production which is heavily dependent on reflection. It has been in use for many years and serves its business purpose just fine. Could there be a better way? Maybe, but a working solution now is better for our business than a promised solution in the remote future.


I think the striking point is that you're forced to use a throwaway to have an opinion.


I agree with you about Go's flaws. However, if you have never used a cast or reflection in Java, you have either not done much Java, or you have used libraries and frameworks that do the casts and reflection for you.


Absolutely. Many people will decide not to use Go for these reasons.

But others will come to the conclusion that things like complex name lookup rules, or generally too many possible interpretations of individual syntactical expressions, cause hugely greater mental load than any of Go's shortcomings.

That said, I have a lot of complaints about Go too. Especially error handling.


Almost every dependency injection framework in Java uses reflection extensively, and nearly every Java project I've ever seen uses one of them. Your claim that you didn't have to do reflection yourself may be correct, but I'm betting you were consuming code that used a lot of it.


Reflection is pretty prevalent in Java, even if you don't use it yourself. Many frameworks (like Spring) rely on reflection to do their magic.


> The fact is a lot of people are perfectly fine with current Go capabilities and prefer writing useful software solutions to debating PL theories and community agendas.

Citation needed. Has anyone actually done any sort of survey of users or potential users of Go to see what the opinion is regarding generics? How many would prefer Go with generics to Go without (I use Go almost exclusively at work, and would really appreciate generics, so there's at least one)? How many would prefer Go to not have generics? How many are ambivalent? If there is, could you please cite it?

I'm kind of frustrated by phrases like this being bandied about without evidence to back it.


You can certainly put in some effort to create exhaustive surveys and ask people to take them.

For me, the existence of a lot of software built with Go, like Docker and Kubernetes, plus the HTTP-based tools I develop and use internally at work, is good enough to assume generics are not that important.


You'll never get that evidence; the type of people who are willing to fill out a survey are not the same type of people who don't care and are just Getting Things Done.

FWIW internally the Go team has a survey of all Googlers each year about why they are or are not using Go, but I am guessing even then getting responses about why they aren't is hard to come by.

I personally have other axes to grind that are much higher on my list (like os.File not being an interface) than generics.


Perhaps not in the short term, but I hope the comment does lead to some benefit in the long term, despite semi-religious attachment of some to a particular technology. Even the most flippant of downvoters will perhaps remember these words months or years into the future.


I don't use or particularly like Go, but I don't see what use is supposed to come from this. These comments aren't reasoned criticism — they just come across as bashing and sneering, as though you have some kind of grudge against a programming language. If you actually mean to contribute to the discussion, I think you might benefit from taking a few steps back.


No sneering involved at all, I assure you! Just an interest in programmer productivity, which I see as genuinely helpful.

Perhaps you see all languages as roughly equivalent, and that is the cause of our differing opinions. In any case, I've known quite a few engineers who really drank the Kool-Aid, invested heavily in Go, became zealots for the language, and suffered for it before moving to options like Rust, Elixir, Kotlin, or even Java or C++.

I'm just trying to prevent some of that suffering going forward. Nothing more nefarious than that!


Nah. Your post will be forgotten. You repeat the same tired old points - Go isn't Haskell/C++/Rust. That's correct, it's not.

Those complaints have been said a thousand times already, will no doubt will be said a thousand times more. And all will be ignored, as they should.


People generally need to see a message many times before it sinks in, especially if it's one they disagree with. Perhaps for you, there remain many more before any benefit is accomplished. That's okay.

It's also possible that your needs align particularly well with those of those who control the project. That's okay, too!



