The Next Step for Generics (golang.org)
673 points by ainar-g 17 days ago | 664 comments



Looks like a big step in the right direction. The biggest pain is that methods cannot take additional type parameters, which prevents the definition of generic containers with a map method like

    func (c Container(T)) Map(type V)(transform func(T) V) Container(V)
if you want to see what `Result(T)` and `Option(T)` look like under this proposal, check out a very stupid implementation here https://go2goplay.golang.org/p/dYth-AQ0Fru It's definitely more annoying.

But, as per https://go.googlesource.com/proposal/+/refs/heads/master/des... it looks like maybe they will allow additional type parameters in methods, which would be REALLY nice.
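For concreteness, here is a rough sketch of the top-level-function workaround the draft forces on you, written in the square-bracket syntax of Go 1.18+ rather than the draft's parentheses; Container and Map are illustrative names, not part of the proposal.

    type Container[T any] struct {
        items []T
    }

    // Map has to be a free function: a method on Container[T] cannot
    // introduce the extra type parameter V.
    func Map[T, V any](c Container[T], transform func(T) V) Container[V] {
        out := Container[V]{items: make([]V, 0, len(c.items))}
        for _, item := range c.items {
            out.items = append(out.items, transform(item))
        }
        return out
    }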


There's no need to use the pointer in there; you could just use an ok bool instead (saving the indirection): https://go2goplay.golang.org/p/4hr8zINfRym

I think it's interesting to observe that using the Result type is really not that much different from using multiple return values - it's actually worse in some ways because you end up using "Unwrap", which can panic.


I like your solution, although it's a tradeoff when the value type is some kind of nested struct whose zero value could be expensive to create.

Agreed that multiple return is actually quite nice in many situations, although `Unwrap()` is generally only called after checking `OK()` and the real benefit is in using `Map()`.

    a, err := DoCalculation()
    if err != nil {
      return err
    }
    b, err := Transform(a)
    if err != nil {
      return err
    }
is a lot more verbose and harder to read than

    a := DoCalculation()
    b := ResultMap(a, Transform)
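One rough guess at what `ResultMap` could look like, written in the Go 1.18+ bracket syntax rather than the 2020 draft's parentheses (the type, field, and method names here are made up):

    type Result[T any] struct {
        value T
        err   error
    }

    func (r Result[T]) OK() bool { return r.err == nil }

    // ResultMap short-circuits on a previous error, otherwise applies a
    // fallible transform. It has to be a free function because methods
    // can't add the extra type parameter V.
    func ResultMap[T, V any](r Result[T], transform func(T) (V, error)) Result[V] {
        if r.err != nil {
            return Result[V]{err: r.err}
        }
        v, err := transform(r.value)
        return Result[V]{value: v, err: err}
    }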


I happened upon this comment a bit late, and caveat I'm not really a software engineer, but this comment made me think of something...

I've written a decent amount of code professionally in the first and second styles.

I sincerely prefer the second style, for reasons that are hard to articulate; maybe the aesthetic, the space, or something that I assume is equally questionable.

After I stopped writing a lot of code, when I got pulled back in to debug or provide context long after the fact, it was way easier in "real" code bases to catch myself back up on things when the code was written in style 1 than when it was written in style 2!

I may be alone in this sentiment, and I even regret that I have it.

I think there is also a bit of satisfaction in writing the code in the second example, especially when the example is a lot more complex than the one provided here.

Maybe it comes down to how much re-reading/maintenance your codebase actually needs. I wonder if coding convention/style has ever been mapped to the types of problems that code creates, and whether repairing those problems depends on the code style... I'm sure if I google it I'll find something :)


This should be the top comment: constructive criticism with a working example of how a simple sum-type-Rust-like-thing would be implemented.


You can improve the non-method syntax slightly by using option.Map in Go style. But it still isn't as nice as a method.

I think type parameters on methods will definitely get added, since they're so standard and, if anything, it's more confusing not to have them.


Yeah, it's a little nicer that way, but methods would be ideal IMO.


I like generics for collections but that is about it. I've seen "gifted" programmers turn everything into generic classes and functions which then makes it very difficult for anyone else to figure out what is going on.

One reason I like go is it doesn't have all the bells and whistles that give too many choices.


Here's some generic code I wrote in Rust recently. I had two unrelated iterators/collections and I needed to flatten and sort them into a single buffer.

    struct Data {
        idx: usize
        // ... 
    }
    
    struct Foo {
        stuff: Vec<Vec<Data>>
    }

    
    struct Bar {
        stuff: Vec<Data>
    }


    fn flatten_and_sort(foo: Foo, bar: Bar) -> Vec<Data> {
        let mut output = 
            foo.stuff.into_iter()
               .flat_map(|v| v.into_iter())
               .chain(bar.stuff.into_iter())
               .collect::<Vec<_>>();

        output.sort_by(|lhs, rhs| lhs.idx.cmp(&rhs.idx));
        output
    }
Now you could argue that this is just "generics for collections", but the way those iterator combinators are written makes heavy use of generics/traits that aren't immediately applicable to collections. Those same combinator techniques can be applied to a whole host of other abstractions that allow for that kind of user code, but it's only possible if the type system empowers library authors to do it.


You can actually use the power of generics to get rid of the two inner calls to `into_iter()`:

    let mut output: Vec<_> =  
        foo.stuff.into_iter()
            .flat_map(|v| v)
            .chain(bar.stuff)
            .collect();


I believe you can just use `.flatten()` instead of `.flat_map(|v| v)`:

    let mut output: Vec<_> =  
        foo.stuff.into_iter()
            .flatten()
            .chain(bar.stuff)
            .collect();
I might make it slightly more clear what the intent is by doing this:

    let mut output: Vec<_> =
        Iterator::chain(foo.stuff.into_iter().flatten(), bar.stuff)
            .collect();
But still basically the same.


Yes, functional programming patterns are nice. It's possible to write that even more concisely in JavaScript.


I've certainly encountered codebases that abused inheritance and templates/generics to the point of obfuscation, but you can abuse anything really. Besides, in my experience the worst offenders were in C++, where the meta-programming is effectively duck-typed. Trait-based generics like in Rust go a long way toward making generic code readable, since you're always aware of exactly what meta-type you're working with.

I definitely don't use generics if they can be avoided, and I think preemptive use of genericity "just in case" can lead to the situation you describe. If I'm not sure I'll really need generics I just start writing my code without them and refactor later on if I find that I actually need them.

But even if you only really care about generics for collections, that's still a massive use case. There's a wealth of custom and optimized collections implemented in third party crates in Rust. Generics make these third-party collections as easy and clean to use as first party ones (which are usually themselves implemented in pure Rust, with hardly any compiler magic). Being easily able to implement a generic Atomic type, a generic Mutex type etc... without compiler magic is pretty damn useful IMO.


What's wrong with this?

  class Result<T>
  {
    bool IsSuccess {get; set;}
    string Message {get; set;}
    T Data {get; set;}
  }
On many occasions, I like using result types for defining a standard response for calls. It's typed and success / fail can be handled as a cross-cutting concern.


That's a generic container of 0 or 1 elements ;)

It's also incredibly unsafe and why generics aren't enough. C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common. Nothing prevents the user from attempting to retrieve the data without first retrieving the success status.

On the other hand, this improves on it dramatically:

    enum Result<T, E> {
      Success(T),
      Failure(E)
    }


I'm convinced that the lack of Sum Types like this in languages like Java/C#/Go is one of the key reasons that people prefer dynamic languages. It's incredibly freeing to be able to express "or". I do it all the time in JavaScript (variables in dynamic languages are basically one giant enum of every possible value), and I feel incredibly restricted when using a language that requires a class hierarchy to express this basic concept.


I completely agree. Every passing day I become more convinced that a statically typed language without sum types or more broadly ADTs is fundamentally incomplete.

The good news is that many languages seem to be cozying up to them, and both the JVM (through Kotlin, Scala, et al.) and .NET (through F# or C# w/ language-ext) support them.

Even better news is that the C# team has stated that they want to implement Sum Types and ADTs into the language, and are actively looking into it.


I just don't see, in properly designed code, that there would be that much use for sum types if you have generics. When are you creating functions that take or return radically different types that need to be expressed this way?

I dislike dynamic languages where parameters and variables can take on any type -- it's rarely the case that the same variable/parameter would ever need to contain a string, a number, or a Widget in the same block of code.

I find it much more freeing to have the compiler be in charge of exactness so I can make whatever changes I need knowing that entire classes of mistakes are now impossible.


> When are you creating functions that take or return radically different types that need to be expressed this way

Let's say you're opening a file that you think is a CSV. There can be several outcomes:

- the file doesn't exist

- the file can't be read

- the file can be read but isn't a valid CSV

- the file can be read and is valid, and you get some data

All of these are different types of results. You can get away with treating the first 3 as the same, but not the last. Without a tagged union, you'll probably resort to one of a few tricks:

- You'll have some sort of type with an error code and a nullable data field (see the sketch after this list). In reality, this is a tagged union; it's just that your compiler doesn't know about it and can't catch your errors.

- you'll return an error value and have some sort of "out" value with the data: this is basically the same as the previous example.

- you'll throw exceptions, which usually ends up with people writing code that forgets about the exception because the compiler doesn't care about it, and the code works 99% of the time until it completely blows up.
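A tiny Go sketch of that first trick (the type and field names are invented), just to make it concrete: nothing tells the compiler that Rows is only meaningful for one value of Code.

    type CSVResult struct {
        Code int        // 0 = ok, 1 = missing, 2 = unreadable, 3 = not valid CSV
        Rows [][]string // only meaningful when Code == 0; nothing enforces that
    }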


If you want to force people to handle the above 3 cases, couldn't you just throw separate checked exceptions (eg in Java)? In that case the compiler does care about it. You can still catch and ignore but that imo is not a limitation of the language's expressiveness.


Checked exceptions would have been an ok idea if it weren't for the fact that at least when I was writing Java last (almost 10 years ago) they were expressly discouraged in most code bases. Partially because people just get in the lazy habit of catch and rethrow RuntimeException, or catch and log, etc. when confronted with them. Partially because the JDK itself abused them in the early days for things people had no hope of handling properly.

They also tend to defer handling out into places where the context isn't always there.

The trend in language design does seem to be more broadly away from exceptions for this kind of thing and into generic pattern matching and status result types.


> Checked exceptions would have been an ok idea

> Partially because people just get in the lazy habit of catch and rethrow RuntimeException, or catch and log, etc. when confronted with them.

After quite a while of thinking this way, I came to the conclusion that:

95% of the time, there's no way to 'handle' an error in a 'make it right' sense. Disk write failed? REST request failed? DNS lookup? There usually isn't an alternative to logging/rethrowing.

When there is a way to handle an error (usually by retrying?), it's top level anyway.

Furthermore, IO is the stuff that can just 'go wrong' regardless of how good the programmer is, and IO tends to sit at the bottom in most Java programs. This means every method call is prone to IOExceptions.


Yes, after a few years of Java we all end up there. Frankly it's a good argument for the Erlang "Let It Crash" philosophy.

https://verraes.net/2014/12/erlang-let-it-crash/

If IOException on a read is truly happening, and it isn't just a case of a missing file, there are serious issues that aren't going to be fixed with a catch-and-log, or be able to be handled further up the call stack.


One benefit I've found with error-enums is just being aware of all the possible errors that can occur. You're right: 95% of the time you can't do anything except log/retry. But that 5% of the time become runtime bugs which are a massive pain. It's really nice when that is automatically surfaced for you at development time.


Note that checked exceptions are essentially the same thing as returning a tagged union, from a theoretical perspective at least.

They're not popular in Java though, because the ergonomics is a lot worse than working with a Result type.


Honest question: do you think this kind of stuff is going to be adopted by the majority in the next decade or two? Because I'm looking at it and adding even more language features like that seems to make it even harder to read someone else's code.


um... you realize the parent post is talking about having sum types in statically typed languages (eg. rust), when you already do this all the time in dynamic languages like javascript and python right?

So, I mean, forget 'the next decade or two'; the majority of people are doing this right now; Python and JS are probably the two most popular languages in use right now.

Will it end up in all statically typed languages? Dunno; I guess probably not in Java or C# any time soon, but Swift and Kotlin support them already.

...i.e. if your excuse for not wanting to learn it is that it's probably an edge case that most people don't have to care about now, and probably never will, you're mistaken I'm afraid.

It's a style of code that is very much currently in use.


Are the majority actually writing code like this though? In the case of dynamic languages, this property seems more like an additional consequence of how the language behaves. It's not additional syntax.


> Are the majority actually writing code like this though?

Yes.

For example, some use cases: https://www.typescriptlang.org/v2/docs/handbook/unions-and-i...

This sort of code is very common.

I really don't know what more to say about this; if you don't want to use them, don't. ...but if your excuse for not using them is that other people don't, it's wrong.


Because even with generics, you are not able to express "or"; two different choices of types that have _different_ APIs. With generics, you can express n different choices of types that have all the _same_ API.

It's a good software engineering principle to make control and data flow as streamlined as possible, for similar data. Minimize branches and special cases. Generics help with this, they hide the "irrelevant" differences, surfacing only the relevant.

On the other hand, if there are _actually_ different cases, that need to be handled differently, you want to branch and you want to express that there are multiple choices. Sum types make this a compiler-checked type system feature.
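As a rough illustration of what that could look like if the draft generics were used to fake a two-case sum type (Go 1.18+ syntax; Either, Left, Right, and Match are invented names, and unlike a real sum type the compiler can't check exhaustiveness for you):

    // Either holds one of L or R; the fields stay unexported so the only
    // way to look inside is Match, which requires handling both cases.
    type Either[L, R any] struct {
        left    L
        right   R
        isRight bool
    }

    func Left[L, R any](v L) Either[L, R]  { return Either[L, R]{left: v} }
    func Right[L, R any](v R) Either[L, R] { return Either[L, R]{right: v, isRight: true} }

    func Match[L, R, T any](e Either[L, R], onLeft func(L) T, onRight func(R) T) T {
        if e.isRight {
            return onRight(e.right)
        }
        return onLeft(e.left)
    }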


Let's take Rust's hash map entry api[0], for example. How would you represent the return type of `.entry()` using only a class hierarchy?

    let v = match map.entry(key) {
        Entry::Occupied(mut o) => {
            *o.get_mut() += 1;
            o.into_mut()
        }
        Entry::Vacant(v) => {
            update_vacant_count(v.key());
            v.insert(0)
        }
    };
I view sum types as enabling exactly the kind of exactness you describe in your last line; especially since you can easily switch/match on a specific subtype if you realize you need that, without adding another method to the base class and copying it into the X subclasses you have implementing the different behavior.

[0]: https://doc.rust-lang.org/std/collections/hash_map/enum.Entr...


Rust has both generics and sum types, and benefits enormously from both.

And sum types aren’t for “radically different types”. You can define an error type to be one of several options (i.e. a more principled error code), or to represent nullability in the type system, or to indicate fallibility without relying on exceptions, etc.

Rust uses all of these to great effect, and does so because these sum types are generic.


> I'm convinced that lack of Sum Types like this in languages like Java/C#/Go are one of the key reasons that people prefer dynamic languages.

It doesn't hurt that static languages (TypeScript) or tools (mypy) that lightly lay on top of dynamic languages often do support sum types.


> It's also incredibly unsafe and why generics aren't enough. C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common.

uh, you'd never get a null-pointer exception in C++ given the type that OP mentioned. Value types in C++ cannot be null (and most things are value types by a large margin).


> Value types in C++ cannot be null

They can just not exist. And C++ being C++, dereferencing an empty std::optional is UB. In practice this particular UB often leads to way worse consequences than more "conventional" null-pointer derefs.


Then write your own optional that always checks on dereference or toggle whatever compilation flag enables checking in the standard library you are using.


Instead you can have undefined behaviour in C++.

I don't think get;set is C++, though. It also breaks encapsulation.


You can also constrain a generic type only to value types in C#:

  class Result<T> where T: struct
  {
  ...
  }
In that case it can't be null with C# either.


Then you can't construct it unless it's successful, no?

A Result<T> that can only contain successful values doesn't seem very useful


You can, it's possible to address "missing values" with a default construct. Example:

  int x = default; // x becomes zero
  T x = default; // x becomes whatever the default value for struct is


Then we're back to accessing that value being an enormous footgun, yes?


Then you can't construct it unless it's successful, no?

A Result<T> that can only contain successful values doesn't seem very useful


No, you just are forced to use methods like foo.UnwrapOr(default_value) to get the Result. Or depending on the language, you get a compile error if you don't handle both possible values of the Result enum in a switch statement or if/else clause.

See for example https://doc.rust-lang.org/std/result/enum.Result.html#method... in rust, https://docs.oracle.com/javase/8/docs/api/java/util/Optional... in Java, and https://en.cppreference.com/w/cpp/utility/optional/value_or in C++.


Who are you replying to? Is any of your elaboration related to this result type?

    class Result<T>
    {
      bool IsSuccess {get; set;}
      string Message {get; set;}
      T Data {get; set;}
    }


Ah you're quite correct.


Yes you can? The equivalent type in C++ is std::expected[1] which doesn't even contain a pointer that could be dereferenced (unless T is a pointer obviously).

[1] unfortunately not standardized yet https://github.com/TartanLlama/expected


Who are you replying to? Is it in any way related to the original comment I replied to and this type?

    class Result<T>
    {
      bool IsSuccess {get; set;}
      string Message {get; set;}
      T Data {get; set;}
    }


I am replying to you and it's pretty obviously related to your comment.

You: "C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common."

jcelerier: "you'd never get a null-pointer exception in C++ given the type that OP mentioned."

You: "Then you can't construct it unless it's successful, no?"

Me: "The equivalent type in C++ [to what the OP mentioned] is std::expected". It is not possible to get a null-pointer exception with this type and yet you can construct it.


It sounds quite a lot like you took the type the OP posted and changed it in your reply to a different type that isn't standardized yet, do I have that right?


The code the OP posted is not C++. If you translate it to C++ there is no way to get a null pointer exception.

It sounds quite a lot like you're looking to be pointlessly argumentative, do I have that right?


There are two things being discussed in this thread.

1. The first, my original point was that a high quality type system enforces correctness by more than just having generics. There's no proper way in C++ to create this class and make a sum type - there's no pattern matching or type narrowing like can be done in languages with more interesting type systems and language facilities. Generics is just a first step to a much more interesting, safer way of writing code.

2. The second, my replies to folks who have corrected me, and I'll borrow your little paraphrase here:

> [Me]: "C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common."

>

> jcelerier: "you'd never get a null-pointer exception in C++ given the type that OP mentioned."

>

> [Me]: "Then you can't construct it unless it's successful, no?"

I think this is exactly correct still. If it's impossible to create an instance of Result<T> without... well, a successful result, you may as well just typedef Result<T> to T, right? If it can't store the "failure" case, it's totally uninteresting.

If it _can_ store the failure case, making it safe in C++ is fairly challenging and I dare say it will be a little longer and a little less safe than a Result I can write in a few lines of TypeScript, Scala, Rust, an ML or Haskell derivative, and so on.

Now, I'd love to be proven wrong, I haven't written C++ for a while so the standard may have changed, but is there a way to write a proper enum and pattern match on the value?

It looks like this std::expected thing is neat, but can a _regular person_ write it in their own code and expect it to be safe? I can say with certainty that I can do that for the languages I listed above and in fewer than 10 lines of code.

The C++ answer to that is, well, this:

https://github.com/TartanLlama/expected/blob/master/include/...

I don't think it's a comparison.


> The C++ answer to that is, well, this:

the linked file has a ton of "quality-of-life" things. For instance, comparing two Result values efficiently: you don't want to compare two Result<T> bitwise, and you don't want to put the "is_valid" flag first in the structure layout just to fall back on the automatic default of member-order comparison (that would sometimes waste a few bytes), but you do want the "is_valid" flag to be the first thing compared. Do you know of a language that would do that automatically?

It also supports back to C++11 and GCC 4.9, with various fixes for specific compiler versions' bugs, and supports being used with -fno-exceptions (so a separate language from ISO C++). Sure, today's languages can do better in terms of prettiness, but so would a pure-C++20 solution that only needs to work with a single implementation.

If you are ready to forfeit some amount of performance, for instance because you don't care that the value of your Result will be copied instead of moved when used in a temporary chain (e.g. `int x = operation_that_gets_a_result().or_else([] (auto&& error) { return whatever; });`), 3/4 of the code can go away (and things will still likely be faster than in most other languages).


Well, T can be a pointer / reference here.


That wouldn't change anything about Result<T>'s implicit safety properties. "safe + unsafe == unsafe" - to have a meaningful discussion we should focus on the safe part, else it's always possible to bring up the argument of "but you can do ((char*)&whatever)[123] = 0x66;"


With C# 8 you have nullable reference types, and you can use the compiler to guard against null reference exceptions.


> That's a generic container of 0 or 1 elements ;)

Then chances are, so are most if not all of the uses of generics the OP criticises. The only "non-container" generics I can think of are session types, where the generic parameter represents a statically checked state.


Result types are much better than multiple return values. But now the entire Go ecosystem has to migrate, if we want those benefits (and we want consistent behavior across APIs). It'd be like the Node.js move to promises, only worse...


I'm not sure why you'd use a class like this in Go when you have multiple returns and an error interface that already handles this exact use case.


Because multiple return values for handling errors are a strictly inferior and error-prone way of dealing with the matter.


    func foo() (*SomeType, error) {
        ...
        return nil, someErr
    }

    ...
    result, err := foo()
    if err != nil {
        // handle err
    }
    // handle result
vs

    type Result struct {
        Err error
        Data SomeType
    }

    func (r *Result) HasError() bool {
        return r.Err != nil
    }

    func bar() *Result {
        ...
        return &Result { ... }
    }

    ...
    result := bar()
    if result.HasError() {
       // handle result.Err
    }
    // handle result

I'm not really sure I see the benefit to the latter. In a language with special operators and built-in types it may be easier (e.g. foo()?.bar()?.commit()), but without these language features I don't see how the Result<T> approach is better.


Go can't really express the Result<T> approach. In Go, it's up to you to remember to check result.HasError(), just like it's up to you to check if err != nil. If you forget that check, you'll try to access the Data and get a nil pointer exception.

The Result<T> approach prevents you from accessing Data if you haven't handled the error, and it does so with a compile-time error.

Even with Go's draconian unused variable rules, I and my colleagues have been burned more than once by forgotten error checks.


There are linters that will help you with that.

https://github.com/kisielk/errcheck

https://golangci-lint.run/usage/linters/ has a solid set of options.


I just wish the linter were integrated into the compiler, and that code that didn't check errors would simply not compile.


> without these language features I don't see how the Result<T> approach is better.

That's the point! I want language features!

I don't want to wait 6 years for the designers to bake some new operator into the language. I want rich enough expression so that if '?.' is missing I just throw it in as a one-liner.

Generics is one such source of richness.


A language with sum types will express Result as Success XOR Failure. And then to access the Success, the compiler will force you to go through a switch statement that handles each case.


The alternative is not the Result type you defined, but something along the lines of what languages like Rust or Haskell define: https://doc.rust-lang.org/std/result/


It's interesting that you say this, because I've had the opposite experience. I wouldn't say it's strictly inferior, because there are definitely upsides. If it were strictly inferior, why would a modern language be designed that way? There must be some debate, right?

I love multiple returns/errors. I find that I never mistakenly forget to handle an error when the program won't compile because I forgot about the second return value.

I don't use go at work though, I use a language with lots of throw'ing exceptions, and I regularly miss handling exceptions that are hidden in dependencies. This isn't the end of the world in our case, but I prefer to be more explicit.


> If it was strictly inferior, why would a modern language be designed that way

golang is not a modern language (how old it is is irrelevant), and the people who designed it did not have a proper language design background (their other accomplishments are a different matter).

Having worked on larger golang code bases, I've seen several times where errors are either ignored or overwritten accidentally. It's just bad language design.


I cannot think of a language where errors cannot be ignored. In go it is easy to ignore them, but they stick out and can be marked by static analysis. The problems you describe are not solved at the language level, but by giving programmers enough time and incentives to write durable code.


The following line in golang ignores the error:

    fmt.Println("foo")
Compare to a language with exception handling where an exception will get thrown and bubbles up the stack until it either hits a handler, or crashes the program with a stack trace.

And I was referring to accidental ignoring. I've seen variations of the following several times now:

    res, err := foo("foo")
    if err != nil { ... }
    if res != nil { ... }
    res, err = foo("bar")
    if res != nil { ... }


Usage of linters fixes this:

>The following line in golang ignores the error:

   fmt.Println("foo")
fmt.Println() is blacklisted for obvious reasons, but this:

    a := func() error {
        return nil 
    }
    a()
results in:

    go-lint: Error return value of 'a' is not checked (errcheck)
>And I was referring to accidental ignoring. I've seen variations of the following several times now:

    res, err := foo("foo")
    if err != nil { ... }
    if res != nil { ... }
    res, err = foo("bar")
    if res != nil { ... }
results in:

    go-lint: ineffectual assignment to 'err' (ineffassign)


> fmt.Println() is blacklisted for obvious reasons

That's the issue with the language: there are so many special cases for convenience's sake, not for correctness's sake. It's obvious why it's excluded, but it doesn't make it correct. Do you want critical software written in such a language?

Furthermore, does that linter work with something like gorm (https://gorm.io/) and its way of handling errors? It's extremely easy to mis-handle errors with it. It's even a widely used library.


Huh, I have seen enough catch blocks in Java code at work that are totally empty. How is that better than ignoring errors?


Because it's an explicit opt-in, as opposed to accidental opt out. And static checking can warn you about empty catch blocks.


In rust, errors are difficult to ignore (you need to either allow compiler warnings, which AFAICT nobody sane does, or write something like `let _ = my_fallible_function();` which makes the intent to ignore the error explicit).

Perhaps more fundamental: it’s impossible to accidentally use an uninitialized “success” return value when the function actually failed, which is easy to do in C, C++, Go, etc.


Or .unwrap(), which I see relatively often.


That’s not ignoring errors, it’s explicitly choosing what to do in case of one (crash).


Error handling is hard, period. Error handling in Go is no worse than in any other language, and in most ways it is better for being explicit and non-magic.

> people who designed it did not have a proper language design background

Irrelevant.

> It's just bad language design.

try { ... } catch(Exception ex) { ... }


Exceptions don't lead to silent but dangerous and hard-to-debug errors. The program fails if an exception is not handled.


> try { ... } catch(Exception ex) { ... }

The error here is explicitly handled, and cannot be accidentally ignored. Unlike golang where it's quite easy for errors to go ignored accidentally.


Nevertheless, this is how it is mostly done in Java. I haven't used eclipse in eons, but last time I did it even generated this code.

If you care with go, use errcheck.


Does errcheck work well with gorm (https://gorm.io/) and its way of returning errors? This is not an obscure library, it's quite widely used.


Does any language save you from explicitly screwing up error handling? Gorm is doing the Go equivalent of:

     class Query {
         class QueryResult {
             Exception error;
             Value result;
         }
         public QueryResult query() {
             try {
                 return doThing();
             } catch (Exception e) {
                 return new QueryResult(e, null);
             }
         }
     }
Gorm is going out of its way to make error handling suck.


> Does any language save you from explicitly screwing up error handling?

It's about the default error handling method being sane. In exception based languages, an unhandled error bubbles up until it reaches a handler, or it crashes the program with a stacktrace.

Compare to what golang does, it's somewhat easy to accidentally ignore or overwrite errors. This leads to silent corruption of state, much worse than crashing the program outright.


> It's about the default error handling method being sane.

Gorm isn't using the default error handling.


That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.


> That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.

I've seen a bunch of code that does the equivalent of the Java I posted above. Mostly when sending errors across the network.


because it has try/catch. Without that (which would be similar to not checking the err in Go) it explodes or throws to a layer up the stack that may not expect it.

Each language has its quirks.


> Without that (which would be similar to not checking the err in go) it explodes or throws to a layer up that may not expect it.

It's not similar to that at all. Without it, the exception bubbles up until it gets caught somewhere, or crashes the program with a useful stacktrace.

With golang, it just goes undetected, and the code keeps running with corrupt state, without anyone knowing any better.


I would say it is a very ergonomic way of doing this. It allows for writing in a more exploratory way until you know what your error handling story is. Then, even if you choose to propagate it later, you just add it to your signature. Also it is very easy to grok and clear. Definitely not strictly inferior.


It's a lot cleaner to pass a Result<T> through a channel or a slice than to create two channels or slices and confirm everyone's following the same convention when using them.


I concede that there are probably scenarios where this design makes sense within that context. I typically find that either I care about a single error and terminating the computation, or I don't care about errors at all. In the former case, the primitives in the sync package (or just an error channel which we send to once and close) are adequate. The latter case presents no issues, of course.

At $work we definitely have examples where we care about preserving errors, and if that tool were implemented in Go a solution like a Result struct containing an error instance and a data type instance could make sense.


It has a bunch of invalid states (message and data both set, neither set, message set but IsSuccess is true, etc.). So you have to either check it every time, or you'll get inconsistent behaviour miles away from where the actual problem is. It's like null but even more so.


Well, for one thing, it doesn't actually work like a proper Optional<T> or Either<T, string> type. It works more like Either<(T, string),(T, string)>, which might have some uses, but isn't typically a thing someone would often reach for if they had a type system that readily supported the other two options.


> What's wrong with this?

That it's mutable, at the very least!


I feel like such a class should either be part of the language, and part of language idioms etc, or it shouldn't be used.


Can you articulate why? It seems to me that 'feel' should not be part of the discussion.


Not GP, but I've sometimes found that libraries implementing similar concepts differently causes issues.

E.g.

    // in package librarya
    type Result struct {
        Err  error
        Data SomeDataType
    }

    // in package libraryb
    type Result struct {
        err  string
        Data SomeDataType
    }

    func (r Result) Error() string {
        return r.err
    }
Now you have two different implementations of the same fundamental idea, but they each require different handling. In Go, where many things simply return an error type in addition to whatever value(s), you would now have three different approaches to error handling to deal with as opposed to just whatever the language specified as the best practice.


This is what interfaces are for.

Let your caller bring their own error type and instantiate your library code over that.
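A minimal sketch of that idea under generics (Go 1.18+ syntax; Result and Wrap are hypothetical names): the library is generic over whatever error type the caller brings, constrained only by the standard error interface.

    // E can be libraryA's error type, libraryB's, or plain error; the
    // generic code relies only on the Error() method.
    type Result[T any, E error] struct {
        Data T
        Err  E
    }

    func Wrap[T any, E error](data T, err E) Result[T, E] {
        return Result[T, E]{Data: data, Err: err}
    }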


Not GP but:

It may frustrate coworkers who need to edit the code.

It adds another dependency into your workflow.


> which then makes it very difficult for anyone else to figure out what is going on

Or we can learn to read them. Just treat types like first-class values. You either assign names to types like you do to values, or you assign a name to a function that returns a type - the latter being generics.


> or we can learn to read them

That's an awful way to think about hard to read code. I could produce the most unreadable one liners you've ever seen in your life. We should condemn that and not blame it on others to "learn how to read".


> That's an awful way to think about hard to read code

Most of the time, what I hear described as "hard to read code" is "a pattern I don't currently have a mental model for". We didn't move on from COBOL by letting that be a deterrent.


Fair, I've actually seen both types of situations. I only complain after having some domain knowledge of the project and the language/tools. After sufficient understanding, I will make sure that the code that gets merged into master is highly readable. Simple > complicated. Always. Don't be ashamed to write simple code.


You write code for an audience. In that audience sit yourself in your current state, yourself a year+ from now, your colleagues (you know their level), and the compiler. With bad luck, that future self is in a state of pulling their hair out trying to debug.

Think about the audience when you code.


I assume you only program in readable languages like COBOL and AppleScript.


Ah, blub. It will never leave us.


I expect after a flurry of initial excitement, the community will settle on some standards about what it is and is not good for that will tend to resemble "Go 1.0 + a few things" moreso than "A complete rewrite of everything ever done for Go to be in some new 'generic' style".


> I like generics for collections but that is about it.

What about algorithms (sorts, folds, etc.) on those containers? I write a lot of numerical code. It sucks to maintain multiple copies of functions that work on arrays of floats, doubles, complex floats, and complex doubles. Templates/generics are a huge win for me here. Some functions work nicely on integer types too.
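For instance, something like the following single definition would cover all four element types at once (Go 1.18+ syntax; the Numeric constraint is spelled out by hand here, not taken from a standard package):

    type Numeric interface {
        ~float32 | ~float64 | ~complex64 | ~complex128
    }

    // Sum is written once and works for real and complex slices alike.
    func Sum[T Numeric](xs []T) T {
        var total T
        for _, x := range xs {
            total += x
        }
        return total
    }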


I think this is probably the single best use case for Generics for Go - numerical operations on the range of number types.


At this point I'd like to summon to go-generics defense all the PHP and Javascript developers who assert unwaveringly "Bad language design doesn't cause bad code; bad programmers cause bad code."


Counterpoint: languages (and libraries, and frameworks, and platforms) so well-designed that they introduce a "pit of success"[1] such that bad programmers naturally write better code than they would have done otherwise.

For example, what if PHP had, from the very beginning, somehow detected string concatenation in SQL queries and instantly `die()`d with a beginner-friendly error message explaining query parameterisation? Tens of billions of dollars of PHP SQL injection vulnerabilities simply never would have happened - and people who were already writing database queries with string concatenation in VB and Java and who gave PHP a try would have been forced to learn about the benefits of parameterisation, and they'd have taken that improved practice back to their VB and Java projects - a significant net worldwide improvement in code quality!

[1]: https://blog.codinghorror.com/falling-into-the-pit-of-succes...

I've been writing in TypeScript for about 5 years now - and I'm in-love with its algebraic type system and whenever I switch back to C#/.NET projects it's made me push the limits of what we can do with .NET's type system just so I can have (or at least emulate as closely as possible) the features of TypeScript's type system.

(As for generics - I've wondered: what if every method/function was "generic", insofar as any method's call-site could redefine that method's parameter types and return types? Of course then it comes down to the "structural vs. nominative typing" war... but I'd rather be fighting for a hybrid of the two than trying to work around a poorly expressive type system.)


What about generics for phantom types?

Ex.

    class ID<T> {
      int id;
    }
The idea is that an ID is just an int under the hood, but ID<User> and ID<Post> are different types, so you can’t accidentally pass in a user id where a post id is expected.

Now, this is just a simple example that probably won’t catch too many bugs, but you can do more useful things like have a phantom parameter to represent if the data is sanitized, and then make sure that only sanitized strings are displayed.
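A hedged sketch of the phantom-type idea in Go generics (1.18+ syntax; User, Post, ID, and NewID are invented names): the parameter T never appears in a field, it exists only to keep the two ID types apart at compile time.

    type User struct{}
    type Post struct{}

    // ID[User] and ID[Post] are distinct types even though both just wrap an int.
    type ID[T any] struct {
        value int
    }

    func NewID[T any](v int) ID[T] { return ID[T]{value: v} }

    func LoadPost(id ID[Post]) { /* ... */ }

    // LoadPost(NewID[User](42)) // would not compile: ID[User] is not ID[Post]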


Just to note, for this specific example, Go supports this with type definitions:

  // UserID and PostID are distinct types
  type UserID int
  
  type PostID int


This isn't quite the same, because it's just an alias - you can pass a UserID to a function accepting a PostID: https://play.golang.org/p/nSOgcJs_66y

It still provides a documentation benefit of course.

EDIT: Whoops, yes, as lentil points out, they are indeed distinct types not aliases. So it does provide the benefit of the Rust solution.


No, it's not an alias, they are distinct types. You can't use the types interchangeably (unless you cast them).

Your playground example didn't try to pass a UserID to a function accepting a PostID, but if you do that, you'll see the error:

https://play.golang.org/p/vyiJ_sLzy4O


Oh neat! Most languages make it a little bit verbose to create these kinds of wrapper types for type safety (with zero overhead), so it's nice that Go has that.

I think the generic approach is a little bit better because of the flexibility, but this approach is still better than not having it at all.


considering the vast majority of programming involves loops I don't see "just for collections" as some minor thing—it's most of what I do


If you don't understand someone else's code, you can either tell them their stuff is too complicated or learn and understand it better. There can be a middle ground of course.


Most of the time, if code is hard to understand, it's bad code. Just because someone writes complex code that uses all the abstractions doesn't mean it's good. Usually it means the opposite.


The go team's attempt at involving everyone in the priorities of the language has meant they lost focus on the wisdom of the original design. I spent 10 years writing go and I'm now expecting to have to maintain garbage go2 code as punishment for my experience. I wish they focused on making the language better at what it does, instead of making it look like other languages.


That said, the Go team is incredibly talented and deserves a lot of kudos for moving much of the web programming discussion into a simpler understanding of concurrency and type safety. Node.js and Go came out at the same time, and Node is still a concurrency strategy salad.


I'd like generics for concurrency constructs. Obvious ones like Mutex<T> but generics are necessary for a bunch of other constructs like QueueConsumer<T> where I just provide a function from T -> error and it will handle all the concurrent consumption implementation for me. And yes, that's almost just a chan T except for the timeouts and error handling and concurrency level, etc.
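For what it's worth, here is a rough sketch of both ideas in the Go 1.18+ syntax (Mutex, With, and QueueConsumer are made-up names, not a proposed API; timeouts and retry policy are elided):

    import (
        "log"
        "sync"
    )

    // Mutex couples the lock with the value it protects; the value is
    // only reachable while the lock is held.
    type Mutex[T any] struct {
        mu  sync.Mutex
        val T
    }

    func (m *Mutex[T]) With(f func(*T)) {
        m.mu.Lock()
        defer m.mu.Unlock()
        f(&m.val)
    }

    // QueueConsumer fans items from a channel out to n workers, each
    // calling handle; errors are just logged in this sketch.
    func QueueConsumer[T any](in <-chan T, n int, handle func(T) error) {
        var wg sync.WaitGroup
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for item := range in {
                    if err := handle(item); err != nil {
                        log.Println(err)
                    }
                }
            }()
        }
        wg.Wait()
    }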


And that's among the reasons it's been left out of Go. Go design was guided by experience working on large software systems; the risk with making a language too flexible is that developers begin building domain-specific metalanguages inside the language, and before you know it your monolingual codebase becomes a sliced-up fiefdom of various pieces with mutually-incompatible metasyntax that defeats the primary advantage of using one language: developers being able to transition from one piece of the software to another without in-depth retraining.

For enterprise-level programming (which is the environment Go grew up in), a language that's too flexible is a hindrance, because you can always pay for more eng-hours, but you can't necessarily afford smarter programmers.


There is an underappreciated advantage to using generics in function signatures: they inform you about exactly which properties of your type a function is going to ignore (this is called parametricity: https://en.wikipedia.org/wiki/Parametricity)

For instance, if you have a function `f : Vec<a> -> SomeType`, the fact that `a` is a type variable and not a concrete type gives you a lot of information about `f` for free: namely that it will not use any properties of the type `a`, it cannot inspect values of that type at all. So essentially you already know, without even glancing at the implementation, that `f` can only inspect the structure of the vector, not its contents.
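In Go terms (1.18+ bracket syntax; Reverse is just an example name), the same reasoning applies: with an unconstrained type parameter, the signature alone guarantees the function can rearrange or drop elements but never inspect them (reflection aside).

    func Reverse[T any](s []T) []T {
        out := make([]T, len(s))
        for i, v := range s {
            out[len(s)-1-i] = v
        }
        return out
    }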


Not all generics are parametric.


Agreed. From a quick skim of the Go generics proposal I get the impression that they are in fact aiming for parametric generics though (in fact they use the term "parametric polymorphism" in the background section).


I like generics, but I find that it is often best to start out writing a version which is not generic (i.e. explicitly only supports usize or something) and then make it generic after that version is written. As a side benefit, I find that this forces me to really think about whether it should actually be generic or not. One time I was writing a small Datalog engine in Rust and was initially going to make it take in generic atoms. However, after going through the above process I ended up deciding that I could just use u64 identifiers and store a one-to-one map from the actual atoms to u64, keeping the implementation simpler.

I agree with the sentiment that it is very easy to overuse generics, though there are scenarios where they are very useful.


I can think of a few other potential use cases in Go. Some ideas:

- Promises

- Mutexes like Rust's Mutex<T> (would be much nicer than sync.Mutex)

- swap functions, like swap(pointer to T, pointer to T)

- combinators and other higher-order functions


For Java/C#, in my experience, I've made that mistake because in both languages the class declaration is very verbose. Generics then end up being the only way to solve problems that would otherwise require dynamic typing/variables.

In TypeScript I don't need generics as much, or as complex ones, because the type definitions are more lax, and we can use dynamic typing in the very complex scenarios.

I don't know which approach Go is taking.


Honestly, as long as you learn when to use generics and when not to, there are a lot of very useful ways to encode state/invariants into the type system.

But I also have seen the problem with overuse of generics and other "advanced" type system features first hand (in libraries but also done by myself before I knew better).


I've done this to one of my pet projects (thankfully unreleased). It just makes debugging/editing on the fly more difficult. I'd love to unwind the mess, but that'll take days to fix what I caused in minutes! It's a big foot-gun.


Mathematically, almost everything generic can be viewed as a collection.


Functions ;)

s -> (s, a) is generic and a Functor (mappable - often conflated with a collection) but it's no collection!


Yeah, I actually think just having a built-in generic linked list, tree, and a few other abstract data types would solve 90% of everyone's problems. Part of the good thing about Go is you solve problems more than you create them.


I agree, those grapes are probably sour anyway...


I can't help feeling it is a missed opportunity to add generics to Go this late. It's a mistake copied from earlier languages (C++, Java), similar to other mistakes Go chose not to solve at its inception, like having implicit nulls (C, C++, Java, JS, C#), lacking proper sum types (C, C++, Java, JS), and having only one blessed concurrency model (JS).

While I think I get the reasons for these decisions in theory - make a simple-to-fully-understand language that compiles blazingly fast - I still feel it's a pity that (most of) these issues were not addressed.


Go really should have learned 2 lessons from Java 5:

1. People will eventually want generics

2. Retrofitting generics onto an existing language is hard and leads to unusual problems

(edit: I'm glad Go is doing this, but...Java learned this in 2004.)


There's is a design document for Go generics.

If you see "unusual problems" with the design, then tell us what they are.

Otherwise it's just shallow pattern matching "Java added generics late, they had problems, Go added generics late therefore they'll have problems too".

Counterexample: C# added generics late and it's a perfectly fine design.

The reason the Go team is not rushing to implement generics is precisely so that the design makes sense in the context of Go.

Over the years Ian Taylor wrote several designs for generics, all of which were found to not be good enough.

They are doing it the right way: not shipping until they have a good design and they didn't have good design when Go launched.


If this follows the monomorphic approach described in Featherweight Go [1][2], they will at least avoid problems caused by type erasure and avoid too much runtime overhead.

1. https://arxiv.org/abs/2005.11710

2. https://news.ycombinator.com/item?id=23368453


But then you have compile time overhead (an issue Rust and C++ have faced). One of Go's design goals was to have very fast compile times, which might be in doubt if they take the monomorphization approach.


Is Rust's slow compile time because of poor LLVM IR generation or just because of monomorphization? D has generics and compiles fast. I guess Nim compile times are okay, too...


Go is just Java repeated as farce. The histories are almost identical with ~10 years lag.

We all called this when Go was created, too.


Just without the "VM".


JAVA -> Go + WASM is conservation of MEMEs? :D

(Not to disparage WASM, which has some nice ideas both on the technical and ecosystem level.)


Thanks, but I don't need another Java. Enjoy your full Java and let Go keep doing things the Go way. :)


Are you saying Go should not have launched in 2009, but should have waited 10 years until generics were ready?


If they had made it a priority, they could have shipped with generics in 2010. There is no new art in this design.


Go has the unfortunate circumstances of its birth being before widespread recognition of the value of generics and better type systems. Those ideas existed in many languages, and in the PL community, and they were starting to take hold in other languages, but the consensus for most software engineers was that "sum types" are hard and "generics" aren't necessary and the type system should stay out of the way.

I think TypeScript, Scala, Kotlin, C#, and various others I forget now proved that these things weren't a fad and could yield significant gains in productivity and code correctness.

Had Rob Pike been more forward looking (or hired Anders Hejlsberg or Guy Steele to design the language) or dipped further into the PL research, he might have been so bold himself. I don't think anyone can fault him for it, these were niche and minority views in 2010 and may not even be in the majority today.

I think at the same time, we see what happens when a new language has large corporate backing in more recent years. Swift more closely resembles Rust than Go in terms of its type system.


"Generics are useful" was not even remotely a niche view in 2010. That was already 6 years after Java got them, and the lack of generics in Go was a common criticism from day one.


Philip Wadler introduced generics into Java and had previously designed Haskell, so he must have been thinking about types for at least a further 15 years before Java (20 years before Go).


This is such a bullshit argument. Why is it that any Go post on Hacker News pulls out all the tropes? Lack of exceptions. Lack of generics. Would generics make Go a better language? Maybe. Does the lack of generics make Go objectively bad? Hell no!

I've had a long career coding in C, C++, Java, Lisp, Python, Ruby... you name it I've done it. Go is my favorite most productive language by far for solving typical backend issues. My least favorite? Java by a HUGE mile.


> Why is it that any Go post on Hacker News pulls out all the tropes? Lack of exceptions. Lack of generics.

It's pretty simple -- those of us who use those feature regularly in other languages know how valuable they are, and we miss them when they aren't there.


> Lack of exceptions

Is that really a problem? I think proper sum types to allow Result types that can encode many success/error states are so much nicer than an exception hierarchy. Rust and Kotlin did not go with exceptions, and for good reasons.

> C, C++, Java, Lisp, Python, Ruby... you name it I've done it.

Let me name a few: Rust, Kotlin, OCaml/Reason, Haskell, Elm. These languages carefully selected a set of features, and they all have no implicit nulls and sum types. None of the languages in your list have those features. I really wonder what you think of these features once you've worked with them.


> Kotlin

Kotlin very much did go with exceptions except for the Result type in coroutines which wraps a Throwable anyway and is only used for the border between the coroutine runner and the async functions.


Why do you dislike java so much?


It was a common criticism but I don't think it was a majority criticism. Hacker News is not representative of the internet at large, and the idea that Go doesn't need or might not even benefit from generics is still pervasive. (See the first person to reply to you.)


The Go authors acknowledged the usefulness of generics as far back as 2009, shortly after its public release: https://research.swtch.com/generic


If you copy most of your design from Pascal / Modula 2 / Oberon as a safe bet to use a time-proven approach, this is only natural. If you don't want to use a time-proven approach, you need to design your own, and it's a massively more complex and fraught enterprise than adding GC and channels (which are both old, time-proven designs, too, just from elsewhere).

You could say that one could maybe copy the time-proven OCaml. But OCaml doesn't have a proven concurrency story, unlike Oberon and Modula-2 (yes, it had working coroutines back in 1992).

I also wish all these design decisions would not have been made in a new language, like they haven't been made in Rust. Unfortunately, the constraints under which creators of Go operated likely did not allow for such a luxury. As a result, Go is a language with first-class concurrency which one can get a grip of in a weekend, quick to market, and fast to compile. Most "better" languages, like Rust, OCaml, or Haskell, don't have most of these qualities. Go just fills a different segment, and there's a lot of demand in that segment.


>As a result, Go is a language with first-class concurrency which one can get a grip of in a weekend

Which is a mess. Sending mutable objects over channels is anything but "proven concurrency story".

Both Rust and Haskell (and OCaml, if we talk about concurrency and not parallelism) have way better concurrency story than Go. I don't care how fast one could start to write concurrent and parallel code if this code is error prone.

The only difference between Rust/Haskell and Go is that the former force you to learn how to write the correct code, while the latter hides the rocks under the water, letting you hit them in production.


I used "first-class" here to denote "explicitly a feature, a thing you can operate on", like "first-class functions" [1]. I didn't mean to say "best in class". I don't even think we have a definite winner in this area.

[1]: https://en.wikipedia.org/wiki/First-class_function


> first-class concurrency

That's an overstatement.

Also, implicit nulls are not beneficial to anyone. And sum types could have made results (error/success) so much nicer. I see no reason to have gone with nulls at Go's inception, hence I call it a mistake.


The flip side was on the top of HN yesterday:

Generics and Compile-Time in Rust

https://news.ycombinator.com/item?id=23534974

It's easy for spectators and bystanders to call something a mistake when they don't understand the tradeoffs. Try designing and implementing a language and you'll see the tradeoffs more clearly.


D has generics and compiles fast. There are probably other examples too.

The overlooked thing is that rustc produces poor-quality LLVM IR, which is also mentioned in the FAQ.

And generics reduce the amount of manual for-loop juggling one has to write. I don't think adding generics makes much of a difference to compile times.


> The overlooked thing is that rustc produces poor-quality LLVM IR, which is also mentioned in the FAQ.

LLVM is also slow in general. If you use it, it's likely the thing that's bottlenecking your language's compile time, unless you've done something insane like templates and `#include`, etc.

Inevitably, even if your source language handles generics far better at the design stage than C++ does, your build stage is probably going to be unacceptably slow if you use LLVM.


> LLVM is also slow in general.

Agreed. But so is GCC. And I guess many of the "zero cost" abstractions require an advanced optimizing compiler like LLVM to actually be zero cost (or rather, they move that complexity to the compiler).

They specifically mentioned the technical debt and the poor LLVM IR generation issue, though. I wonder if it has gotten attention or been fixed yet. Maybe @pcwalton knows.


OK sure, now write your own backend or use one other than LLVM (which one?). Now you're going to learn about a different set of tradeoffs.


Go and D made the tradeoff. And both have good compile times. Both of them have GCC and LLVM frontends as well as a homegrown one. And homegrown ones have fast compile times.

Edit: well fast compile times at the expense of optimization. But that kind of proves the point.


I feel the same way, but then again Rust (among others) exists, so it's not like those of us who dislike this approach are "stuck" with Go. I think it's actually nice to have the choice; reading the comments in this thread, it's pretty clear that there are people who don't feel like we do.

Go clearly values "simple and practical" over "elegant". It seems to be quite successful at that.


I have to refute this 'simple and practical' claim.

Not having generics, and therefore not having very common tools, doesn't seem very good. You have to write a for loop for what is a simple function call in Python or JavaScript or <insert modern language here>. Such detail easily interrupts reading/writing flow.


I agree with the spirit of what you are saying, but I'd nitpick and say that the lack of generics and the presence of implicit nulls make Go less simple and less practical than other options in the same space.


As someone using Java, which has null, and hardly using any generics even though they are available, I find Java immensely practical, with a huge number of libraries and other ecosystem facilities.

It seems you are of the opinion that if you do not find something practical, no one else can.


And I find Go immensely practical and Java immensely impractical. But I see its value. It's almost like we have different languages because people are myriad :)

Problem-space and learning styles play a huge role.


Having seen Java's shortcomings up close, I can totally understand that. It is just that some folks think their subjective opinions about programming are universal objective truths.


That's exactly it. There are Rust and Java already. Use them, please. Don't try to make another Java out of Go. One Java is enough.


> having implicit nulls (C, C++, Java, JS, C#),

> lack of proper sum types (C, C++, Java, JS)

Incidentally, Java and C# have addressed (or are in the process of addressing) both issues. Both languages/platforms are superior to golang in almost every conceivable way.


I've used a lot of Java and C#, and a decent amount of Go. I'm not sure I would call Go inferior. The design goals were different. I'm also not a language wonk, so maybe that's why I enjoy the relative simplicity of Go. The developer loop of code, run, code is very fast, and the standard library is good out the box. I just want to write code and get stuff done. To that end, Go is another capable, workman type language (like Java or C#).


Neither has addressed it, though - they have works in progress. Same as the lack of decent threading in Java.

It's been a WIP for years. Let's judge based on the status quo rather than being disingenuous.


How does Java not have decent threading? Go doesn't even have actual threads.

I'm assuming you're talking about goroutines, a.k.a. virtual threads/fibers, which are entirely different from actual threads.


apologies, but yes, you're correct - I was referring to green threads


Hopefully it will preview in the next release (6-month release cycle): http://cr.openjdk.java.net/~rpressler/loom/loom/sol1_part1.h...

EAP build - http://jdk.java.net/loom/


I'm not aware of any current Java project on sum types. There is multi-catch for exceptions, but no immediate plans that I know of to use it elsewhere.


A form of pattern matching and switch expressions have already made it into the language as of JDK 14. Those are paving the way for full-blown pattern matching and sealed types:

https://cr.openjdk.java.net/~briangoetz/amber/datum.html

https://openjdk.java.net/jeps/360


Java isn't fixing nulls, IMHO. They are making it worse.


How are they making it worse?

In any case, there already exist solutions in place:

https://checkerframework.org/manual/#example-use

https://github.com/uber/NullAway


The code ends up looking like:

  Optional<Integer> foo;

  //....

  if (foo != null) {
      foo.ifPresent(new Consumer<Integer>() {
          @Override
          public void accept(Integer value) {
              // ...
          }
      });
  }


I would never assign null to an Optional nor use a library that returned one.

To ensure assignment before use, make it final.


Except for green threads.


That's true though Project Loom is working on fibers. There's already an early access release to test.


Minor point, they've settled on the name "virtual threads" instead at this point.


Sum types are not planned for C# 9, are they?


It has pattern matching, but not a full discriminated union implementation yet. That seems to be on the roadmap though:

https://github.com/dotnet/csharplang/issues/113

https://github.com/dotnet/csharplang/issues/485


Swift had generics in place from Day One. It also hides the generics when they don't need to be shown (like many collection types).

I think that most of the standard language features are based on generics, but you would never know it, as a user of those features.


To be fair, Swift has a somewhat strange generics implementation that forces programmers to jump through hoops to achieve things that are quite common in similar languages. The whole standard library is peppered with AnyThis and AnyThat; the language doesn't have generators (yield), and I'm not sure they're possible with the current generics design; and programmers need to learn what type erasure is, just because the core team decided that returning a generic interface from a function is something nobody would want.

I like Swift a lot, for many reasons, but generics design isn't one of these reasons.


Fair 'nuff. I was always a fan of the way C++ did it, but it was a great deal more technical.

Swift is sort of "generics for the masses." C++ is a lot more powerful, but also a lot more "in your face." I really do like the way that Swift allows generics to have implicit types.


Can't get enough of those template errors.


This looks like a good, minimal implementation of generics. It's mostly parameterized types. Go already had parameterized types, but only the built-ins - maps and channels. Like this.

    var v chan int
Somewhat surprisingly, the new generic types use a different syntax:

    var v Vector(int)
Do you now write

    v := Vector(int)
or

    v := make(Vector(int))
Not sure. The built-in generics now seem more special than they need to be. This is how languages acquire cruft.

The solution to parameterized types which reference built-in operators is similarly minimal. You can't define the built-in operators for new types. That's probably a good thing. Operator overloading for non-mathematical purposes seldom ends well.

They stopped before the template system became Turing-complete (I think), which prevents people from getting carried away and trying to write programs that run at compile time.

Overall, code written with this should be readable. C++ templates have reached the "you are not supposed to understand this" point. We don't need to go there again.


Presumably you write

    v := Vector(int){}
Or

    v := make(Vector(int))
Just like with unparameterized types.


The details are still being fleshed out, and this is a great point to bring up. I suspect it may be a shim to make the parser more manageable. I for one would love to see all generics harmonize with the existing syntax for map, chan, etc.


Even the current syntax is a bit incoherent. Would you go with "Vector[int]" a la map, or with "Vector int" a la chan? All things considered, I think the proposed syntax is actually better.


In my opinion, `Vector int` is the more logical syntax. `Vector[int]` to me seems like a vector indexed by int (rather than containing int), much like `map[int]T` is a map indexed by int, and `[5]T` is an array indexed by an integer less than 5.

For the same reason, I dislike `std::array<T, 5>` which puts the range type before the domain constraint. This is inconsistent with `std::map<int, T>` which puts the domain type before the range type, and `[](int x) -> T` which puts the domain type before the range type.


> `Vector int` is the more logical syntax.

This introduces several parser ambiguities, esp. related to nested template types or types with multiple parameters. For instance, does `Foo Bar Baz` read as `Foo(Bar, Baz)` or `Foo(Bar(Baz))`?


Right. For consistency, the old built-in generics should be written

    var v chan(int)
    var m map(keytype,datatype)
    
instead of

    var v chan int
    var m map[keytype]datatype
but, for backwards compatibility, that probably won't happen. Thus, cruft.

Map was always a bit too special in Go. Channels really are special; the compiler knows quite a bit about channels, what with the "select", "case", and "go" keywords and their machinery. But maps are just a built-in library object.


If you want a dynamically sized vector pointing to a datatype, I think `Vector [] Value` is most "semantically meaningful". And if you want to define your own map, `Map [Key] Value` is most self-explanatory... now that I look at it, I'm starting to feel that this introduces too many special cases and complexity into generic syntax to be useful for the actual language parser. I might stick with that in documentation/comments instead.


I wonder if they will at least add this as an additional way to define them and deprecate the old ones without disabling them.


They won't. Go leans very hard towards "there is exactly one way to do it".


I know they won't but now there are 3 ways to express the same idea!


I agree with your point about syntax - we're still not at the point where you could roll your own map-like type, and I think that should be a goal for this implementation.

And yeah, operator overloading leads to utterly illegible code. Let's not go there!


What is missing for a "map-like type" other than operator overloading ([], []=)?


Not a strict requirement, but hashing pointer values.

In Go, a pointer's integer value may not be stable since the GC may move the underlying data. This doesn't happen with today's runtime, but it may in the future.

Why does this matter? What if I want to make a hash table with the key being a pointer? The built-in map can just go ahead and hash the pointer itself (i.e. a 32/64-bit integer) since it's a part of the runtime and will be updated if the GC is changed. But a user map cannot do this: it would have to hash the actual value itself. Depending on your data, this could be significantly more expensive.

This doesn't stop anyone from implementing a hash table, but it may mean that it can't be as fast as the built-in map.


I don't really understand the question. Is [] an operator?


Yes. "[]" would denote the operator for accessing element of collection associated with key/index k by [k] syntax. Similarly "[]=" would denote operator for storing element under a given key/index.

E.g. in python these would correspond to a custom type implementing https://docs.python.org/3/reference/datamodel.html#object.__... & https://docs.python.org/3/reference/datamodel.html#object.__...


OK, yeah, I get it now.

so if I defined my own map type (e.g. orderedmap[T] that kept things in the order they were added) then I'd need to write code for the [] operator to access an element, is that right?

I guess I'm not really sure how that differs from declaring orderedmap(T) (as it is in the proposal). Why do I need to overload one and not the other?
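For concreteness, here's a rough sketch of the kind of thing I mean under the draft syntax (the type and the Get/Set method names are made up): you can declare the generic type just fine, but element access goes through ordinary methods rather than the [] syntax the built-in map keeps for itself.

    type OrderedMap(type K comparable, V interface{}) struct {
        keys   []K
        values map[K]V
    }

    // Set stores v under k, remembering insertion order for new keys.
    func (m *OrderedMap(K, V)) Set(k K, v V) {
        if m.values == nil {
            m.values = make(map[K]V)
        }
        if _, ok := m.values[k]; !ok {
            m.keys = append(m.keys, k)
        }
        m.values[k] = v
    }

    // Get returns the value stored under k and whether it was present.
    func (m *OrderedMap(K, V)) Get(k K) (V, bool) {
        v, ok := m.values[k]
        return v, ok
    }
In other words, is the difference just that I'd write m.Get(k) instead of m[k]?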


Finally. Better late than never. I have to write a lot of Go code at $BIGCORP and keep reaching for generic helper functions like Keys or Map or Filter or Contains. This is going to make my code much more readable.


Will it really?

Old:

  sum := 0
  for _, x := range xs {
      sum += x
  }
New:

  slices.Reduce(xs, 0, func(i, j int) int {
      return i + j
  })


That's the most compelling example you could come up with as someone who presumably writes Go?

Reversing a slice of T. Copying and pasting this:

    for i := len(a)/2 - 1; i >= 0; i-- {
        opp := len(a) - 1 - i
        a[i], a[opp] = a[opp], a[i]
    }
New:

    reverse(a)
Or generic versions of these: https://gobyexample.com/collection-functions
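For what it's worth, here's a rough sketch of what the generic version might look like under the draft syntax (the name is just illustrative):

    // Reverse reverses the elements of a slice in place.
    func Reverse(type T)(a []T) {
        for i := len(a)/2 - 1; i >= 0; i-- {
            opp := len(a) - 1 - i
            a[i], a[opp] = a[opp], a[i]
        }
    }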


Yep, this is exactly the use case I was talking about.


This is an argument I've seen played out in other language communities. In the JS/TypeScript world, the current evolving consensus is "most of the functional programming methods improve readability, except reduce, which is usually worse than a for loop."

I think the other replies to your comment point out the improvements that non-reduce-based versions of the code would bring, but I wanted to specifically call out reduce as being a pretty awkward function in other languages as well. So I think it's not a strong argument against generics (and the OP didn't mention wanting it in the first place); by and large functional methods like map, filter, reverse, etc are more readable; reduce is the exception that often isn't.

I wrote a fair bit of production Go code in my past, and missed many functional programming patterns. Didn't really miss reduce though. I'm glad that it seems like the ones I missed will become possible with this generics proposal.

And reduce isn't horribly unreadable, it's just a slight degradation compared to a for loop IMO. And the other functions are a large improvement.


New:

    slices.Sum(xs)


Yeah no real gain for that case, but for something like filter, definitely a win.
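Something like this rough sketch under the draft syntax, say (the name and shape are illustrative, not a real stdlib API):

    // Filter returns the elements of xs for which keep returns true.
    func Filter(type T)(xs []T, keep func(T) bool) []T {
        out := make([]T, 0, len(xs))
        for _, x := range xs {
            if keep(x) {
                out = append(out, x)
            }
        }
        return out
    }

    // evens := Filter(nums, func(n int) bool { return n%2 == 0 })
That replaces a four-line loop (plus the output slice declaration) at every call site.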


Here's a four-line replacement for the various `sql.Null*` types in the standard library.

    type Null(type T) struct {
        Val   T
        Valid bool // Valid is true if Val is not NULL
    }
Here's what it looks like in practice: https://go2goplay.golang.org/p/Qj8MqYWWAc3


neat example, thanks. It gives you a generic Null type that has a bool is-valid field. Very similar to the sql.Null{String,Int} types, but you don't have to declare each one. Kinda the point of generics :)


Nice. But I need to retrain myself to read Go code again, because

  func Make(type T)(v T) Null(T) {
  }
looks really confusing and unreadable to me - too many brackets...


I agree. I assume they avoided the standard <> brackets to keep things simple, but really it just makes it harder to parse visually.


I consider the parens () syntax much more readable than the brackets < >


Well, you can try to craft a generic function that returns a function that takes T and returns something of type T, and see how many () you gotta get...
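For example, something like this rough sketch (an illustrative function, not taken from the draft itself):

    // Compose returns a function that applies g, then f, to a value of type T.
    func Compose(type T)(f, g func(T) T) func(T) T {
        return func(x T) T { return f(g(x)) }
    }
That's already four pairs of parentheses in the signature before you even instantiate it with something like Compose(int)(inc, inc).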


Unlike other commenters, I don't think it's too late to add generics to Go. I'm looking forward to this.

My only negative reaction is that throughout these proposals, I've not been a fan of the additional parentheses needed for this syntax. I can think of several syntaxes that would be more readable, potentially at the expense of verbosity.


You just can't please everyone.

I think there are benefits to taking the time to gather feedback from people who actually use the language and making educated decisions with regard to language features, instead of shoehorning them into v1.


I am glad contracts are being dropped. The concept was very confusing indeed. Thanks for listening to the community.


Just the concept name is dropped, the concept itself is still there.


Maybe it's because I'm a newcomer to Go, but I'm surprised by all of the shade being thrown in the comments here. I think the design doc presents a clean and simple way to implement generics in Go -- and therefore it's very much in keeping with the spirit Go. I especially like the constraints system. Constraining generic types via interfaces just feels "right" for the language. It leans on a familiar mechanism with just enough leverage.

I'm not without concerns, but I'm struck by conflicting thoughts: I share the worry that implementing generics will negatively affect Go program readability which Go developers (rightfully) covet; when you have a hammer, everything looks like a nail. And yet, I also worry that the Go community is at risk of analysis paralysis on iterating the language in meaningful ways; the novel and innovative spirit that created Go must be harnessed to propel it into the future, lest it dies on the vine.



I'm usually a bit of a skeptic when it comes to Go, but this proposal surprised me; for the most part, it seems like this is exactly what I would want if I were a Go programmer. Being able to define both unbounded generic parameters and parameters bounded by interfaces is key, and while from skimming I'm not sure if it allows giving multiple interfaces as a bound, in practice this shouldn't be a huge issue due to how Go's interfaces are implicitly implemented (which would allow you to define a new interface that's the union of all the other interfaces you want to require, and all the types that implement all of them will automatically implement the new one). Interestingly, the proposal also defines what is essentially a union type interface, which I think is something that Go definitely could use. Although there are a couple of minor warts (e.g. not being able to have generic methods) and the syntax is not what I'd personally pick, overall I'm pretty impressed.


As someone who doesn't enjoy Go but is increasingly having to deal with it at work, I'm very happy with it.


“A type parameter with the comparable constraint accepts as a type argument any comparable type. It permits the use of == and != with values of that type parameter”

I think that’s an unfortunate choice. Quite a few other languages use the term equatable for that, and have comparable for types that have =, ≤ and similar defined. Using comparable here closes the door for adding that later.

I also find it unfortunate that they chose

  type SignedInteger interface {
    type int, int8, int16, int32, int64
  }
as that means other libraries cannot extend the set of types satisfying the constraint.

One couldn’t, for example, have one’s biginteger class reuse a generic gcd or lcm function.


Languages have been known to use “Ordered”, “Orderable”, or “Ord” for what you’re calling “comparable”. Rust and Haskell are both languages that fit this criteria.

The design draft refers to “constraints.Ordered”, so they’re definitely thinking about having both “comparable” and “constraints.Ordered”. Although, for consistency with “comparable”, I think it should be called “constraints.Orderable”.
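To make the distinction concrete, a rough sketch under the draft syntax (assuming the constraints package from the design draft):

    // comparable permits only == and !=.
    func Contains(type T comparable)(xs []T, v T) bool {
        for _, x := range xs {
            if x == v {
                return true
            }
        }
        return false
    }

    // constraints.Ordered is needed as soon as < comes into play.
    func Min(type T constraints.Ordered)(a, b T) T {
        if a < b {
            return a
        }
        return b
    }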


That’s true, but both rust (https://doc.rust-lang.org/beta/std/cmp/trait.Eq.html) and Haskell (https://hackage.haskell.org/package/base-4.12.0.0/docs/Data-...) use eq or similar for what this proposal calls comparable, as do C# (IEquatable), java (through Object.equals), and Swift (Equatable)


Sure, but as someone else already pointed out, “comparable” is an established Go term that refers to “==“ and “!=“ only, and “ordered” refers to the other comparison operators.

My point was that “comparable” is not universally used in place of the “Ordered” term that the Go team is using, as you were seemingly implying. Ordered is a perfectly fine term for it.

You said:

> Using comparable here closes the door for adding that later.

But the door is not closed in any way. It’s just called “constraints.Ordered”, which is perfectly reasonable.


In my opinion, ordered or orderable is a better name than comparable for types implementing (<=). It evokes more of a "totally ordered" vibe than just saying "comparable", which is what we actually want in these types (in order to sort them and so on).


FWIW, “comparable” and “ordered” are well defined terms in the Go language specification:

https://golang.org/ref/spec#Comparison_operators


The vocabulary from that language spec is not even followed consistently in the language's own standard library.

The `strings.Compare`[1] function is used to establish ordering, in the spec sense. You'd think they would name it "Order".

Similarly, the popular go-cmp[2] library provides `Equal` for equality, instead of `Compare`.

[1]: https://godoc.org/strings#Compare [2]: https://github.com/google/go-cmp


So what?


This is the new code branch: https://go.googlesource.com/go/+/refs/heads/dev.go2go

I can't find the actual code review for this. It seems to be hidden/private still?

This was the previous code review: https://go-review.googlesource.com/c/go/+/187317

The comment at the end is where I got the link to the new branch, but as an outsider, I don't have any good way to ask where the new code review is, so I'm leaving this comment here in hopes that a googler will see it and point me in the right direction.

Based on a link that's on the new branch's page, it might be CL 771577, but it says I don't have permission to view it, so I'm not sure.


There isn't a code review for the changes on the dev.go2go branch (though you could construct one using git diff).

The dev.go2go branch will not be merged into the main Go development tree. The branch exists mainly to support the translation tool, which is for experimenting with. Any work that flows into the main Go development will go through the code review process as usual.


In my mind, this generics flip-flop will do no good for the long-term survival of Golang. One way to view it is as "listening to the community and changing", but I think a change this big signals a fracture more than an evolution.

Think about how much planning and foresight goes into making a programming language, especially one that comes out of a massive top-tier tech company like Google. They deliberately chose not to include generics and (from what I remember when I wrote Go code) spent time and effort explaining away the need for them. Now the decision is being reversed, and it signals to me that they don't know what they're doing with the language long term. As a developer, why do I want to hitch my cart to a language whose core vision is very clearly not correct or not what people want?


Or alternatively, they took forever to do it not because they don't like generics, but because they are super conservative with design and wanted to do it right.


Is there anything about this proposal that would have been surprising to someone 5 years ago? Anything to show for waiting most of the decade other than a decade lost?


Judging past work by the "obviousness" of the final solution is shallow, juvenile, and dismissive. Every problem is easy when viewed through the lens of a nearly-complete solution. There have been a wide variety of published proposals for generics in Go [1], each of which seemed plausible but had some insufficiency. Who knows how many proposals were conceived but never left the developer's machine.

If it's so damn obvious and you're so brilliant, where's your proposal dated Jun 2010 (your "decade lost") that "solves" generics?

[1]: https://github.com/golang/go/issues/15292


It is surprising to the people who have been feverishly trying to add generics to Go, with references to their efforts dating back even before Go1.

It may not be surprising to everyone. The trouble is that the people who are not surprised now are the ones who sat back and only watched, preventing their knowledge from making it to the project.

Luckily for Go, a small team of domain experts decided to do more than sit back and their efforts are how the latest proposal was reached.


From very, very early in the project-

"Generics may well be added at some point. We don't feel an urgency for them, although we understand some programmers do."

...

"Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it."

...

"The topic remains open"

They haven't flip-flopped whatsoever. Now that the language has been more thoroughly fleshed out, the community has matured, and various proposals have come and gone, the discussion continues.

