Golang generics proposal has been accepted (github.com/golang)
276 points by komuW on Feb 10, 2021 | 168 comments



IMO the biggest factors for the success of Go are 1) super-fast compile times, 2) easy to interpret compiler errors, and 3) dead simple shipment of high performance, native static binaries.

I think Go has succeeded despite, not because of, the language itself. One very big limitation is the lack of generics or any sort of ability to leverage higher-order types. Sometimes making a small modification to a large codebase can require a huge footprint of lines modified, just because so much code winds up duplicated in slightly different contexts.

That being said, I really hope that Go's core team understands the drivers of its popularity and doesn't compromise the operational side for the sake of language improvements. Although higher-typed languages have no trouble achieving good runtime performance, it seems like there's a fundamental tradeoff at compile time. Scala, Haskell, even Typescript have painful compile times. I don't know if there's any theoretical reason for it to hold, but in practice more type-system complexity seems to lead inevitably to slower compile times.

And as for the topic of clear error messages, the higher-typed languages are all atrocious at this. Even templates in C++ are notorious for puking near impossible to decipher errors. This is something I'm sure can be fixed with enough engineering effort, but it would probably take a long time to get there.

In general, I bitch about the Go language all the time. But I think we should recognize that the simplicity of the language gives us developers a lot of really nice peripheral usability benefits.


> Although higher-typed languages have no trouble achieving good runtime performance, it seems like there's a fundamental tradeoff at compile time. Scala, Haskell, even Typescript have painful compile times.

OCaml has always been known for having really fast compile times, and could be described as a "higher-typed language" I think.

> And as for the topic of clear error messages, the higher-typed languages are all atrocious at this.

Elm and Rust have made good progress here and have some of the most readable and useful errors out there, especially for beginners.


I think specific features are what make compile times very long. Nullable types + ADTs / sum types / 'enums' are not among them, because they don't add exponential blowup to compile times the way type inference and operator overloading do, for example.

Also, adding nullables after the fact isn't very hard; other languages such as Dart and Objective-C have done it. The migration path isn't that hard either, since it is a completely new syntax that doesn't have to be a breaking change, so migrations can be gradual. Out of all the features that the newer higher-typed languages give, I think nullables are probably the best one, and one of the cheapest for all the others to adopt. And while you're adding nullables, you might as well add sum types, because that is how you implement them anyway.


> Even templates in C++ are notorious for puking near impossible to decipher errors.

"Even"? I actually miss the macro-like power of templates when I'm using generics in Java and C#, but generating the longest possible error message using templates is practically an Olympic sport. I presume that SFINAE is partially to blame, because a lot of the output enumerates all the candidates that didn't match.


This is why I've always hated C++. Taking something like templates and abusing their Turing-completeness. If I wanted macros I would use a language that actually supports them, so I wouldn't have to resort to byzantine hacks. Stuff like this is why lots of people would rather use C over C++.


The D language is probably the Goldilocks of programming languages for the rest of us in terms of compilation speed (comparable to Go), program execution (comparable to C/C++), and ease of programming and debugging (comparable to Python). The author of the D language despises macros more than you do, and has designed a solution for modern and sane approaches to template programming [1][2]. If you want to program like C within D's modern ecosystem, there is the safer betterC mode. Heck, now there is even borrow-checking capability if you want safety guarantees similar to Rust's.

[1] https://mobile.twitter.com/WalterBright/status/1343128003178...

[2] https://dlang.org/blog/2018/03/29/std-variant-is-everything-...


Doesn't C++ support exactly the same macros as C? In what way is using C macros "better" than using the exact same macros in C++?


I love-hate C++ templates because deep down they are just macros and you don't have to obsess about constraints like in Java and C#.

To me it's pretty interesting how the three big static OO languages have implemented generics: super flexible permissive macros in C++ (with murderous error messages), type erasure in Java (gross) and actual generics in the runtime (C#). For everyday usage it doesn't matter much but I always like thinking about what's going on under the hood.


This is my usual nitpick, but pretty much every language does type erasure and there is nothing wrong with it, blah blah.

Haskell is arguably great with types, and it does type erasure as well.


> That being said, I really hope that Go's core team understands the drivers of its popularity and doesn't compromise the operational side for the sake of language improvements. Although higher-typed languages have no trouble achieving good runtime performance, it seems like there's a fundamental tradeoff at compile time.

Indeed, this is part of the reason the generics have been so long in coming to Go; you can find discussions about how to implement them as far back as 2009. One of the reasons they aren't using `<T>` to indicate them is specifically because it drastically changes the speed of parsing (since you would need to figure out whether that `<` is a "less than" operator or a syntax for generics).

I can't find the reference right now, but there was a quote in one of the early discussions that was something like:

The Generic Paradox is this: You can have slow programmers (no generics; programmers have to do their own code duplication), slow executables (where you do loads of unboxing), or slow compile times.

EDIT: Found the reference; the exact quote is: "The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?" [1]

The Golang generics proposal was specifically designed to try to reduce the "slow programmers" effect while avoiding either of the other two effects as much as possible.

[1] https://research.swtch.com/generic


> Scala, Haskell, even Typescript have painful compile times.

Can’t speak to the first two, but in some places TS has pathological compile times when you’re trying to give more information to the compiler. A recent example: trying to appease type inference on a set of generics where a wrapped function couldn’t narrow types from its wrapper, even though they were totally valid but mostly inferred. I wrote a type guard which passed the same values with explicitly narrowed types and... it just never completes. It pegs all my CPU cores in VSCode. It's used throughout my project and effectively kills type checking.

I know enough about the underlying function I’m calling that I just cast to any, but now I have a landmine waiting to be triggered.

To be fair, this is in frontend land, where it’s not TS’s fault that the types it has to support are so bonkers to begin with. But holy hell, it’s not always obvious what’s gonna cause compile times to skyrocket.


You can try ReasonML/Bucklescript/ReScript. WARNING: it may spoil you by compiling in fractions of a second and make waiting for tsc miserable. Only do it if you have a high frustration tolerance.


4) Its standard library is also big, modern, and well documented.


> high performance

I would not call 2x slower than C high performance. Decent performance, yes. But certainly not high.

It has easy-to-use concurrency, so it spends less time waiting than typical C code. This might lead to some high-perf claims. But that doesn't make it high performance.


If you put Python and Ruby on the same graph, Go and C will look much the same. Though let's be honest, how much does language speed really matter for most things? It's crap code that kills performance more than anything, from my limited experience, and with a higher-level language you might have easier access to libraries implementing efficient algorithms.

When good performance is needed we don't even "write code" anymore, we use ASICs. BTC mining, network switching/routing, and encryption come to mind.


This was to be expected. But I'm glad for Go.

In a few years a language that interops with Go will come out, where all the Go types have a ?-suffix indicating they are nullable. The language will be mostly null-safe. It will also sport sum types and pattern matching/destructuring in switch statements.

It will be called: Gotlin.


Of course I'm joking, but I think implicit nullability is the worst part of Go. A language designed in an era when this was widely known as the billion-dollar mistake (probably even more expensive by now).

But this is not easily reversed. Not as easy as tagging generics onto the language. And the parallels with Java's maturing are just lovely. Also, we have seen what Kotlin is now doing for Java: a new language was needed to truly fix that one mistake (implicit nullability), introduced in the grandparent of Java (namely C), and the great old grand uncle of Go.


Now that I've worked professionally in a whole bunch of languages that attempt to delete implicit nullability out of existence, I long for its return. Option monads are a two-billion-dollar mistake.

The fact is when you're working with any data coming from any other system, the data is or will become null, somehow, some way, and your program code which treats this as impossible is just literally wrong in a way that is complete gibberish. Additionally, programmers don't want to pass huge lists of parameters to every function, but instead bundle things into structs to be easily passed around, however this model makes it impossible to treat a value as Optional at an early part of the callstack and Non-optional later in the callstack after it's been checked and verified. So you either pass everything as a separate parameter, copy things into different structs all over the place, or just make the value Optional everywhere, deleting the usefulness of making Optionals.... Optional. Just let it be null everywhere, and if it's null somewhere it shouldn't be, the program throws an error--like it should, because there's an error.

Actually, javascript is the only language that has it right. Not only can anything be null, anything can be undefined (which isn't even remotely similar, and anyone who doesn't understand why doesn't belong in the conversation), AND values you don't know about can exist.


> and your program code which treats this as impossible is just literally wrong

That's fine though. The program that treats it as possible but fails to handle it in every possible spot is wrong too, but wrong in a completely unpredictable place, way, and even number of times.

It's freeing to start out assuming "title" can never be null. If it is, you get a quick failure at the very edge of your application where the data comes in. You make it nullable, and the compiler tells you all the places where you made the wrong assumption and now need a fix. Or even better yet, you give it a default value, even an empty string, and nothing more is required.


> copy things into different structs all over the place

Sanitizing input is a very common pattern and I don't see why this option is such a deal breaker. And it would normally just be needed at the system boundary, rather than "all over the place".

Or, you share types between different services using something like grpc or thrift, and then you really can trust that the values coming in are not going to be null.

I definitely don't agree that settling for implicit null is the best option.
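
To make the boundary idea concrete, here is a minimal Go sketch (the rawOrder/order/sanitize names are made up): the raw struct mirrors the wire format with nilable pointer fields, and one copy at the edge produces a value-only struct the rest of the call stack can trust.

    package main

    import (
        "errors"
        "fmt"
    )

    // rawOrder mirrors the wire format: anything the other system might omit is a pointer.
    type rawOrder struct {
        ID       *string
        Quantity *int
    }

    // order is what the rest of the call stack sees: plain values, no nil checks needed.
    type order struct {
        ID       string
        Quantity int
    }

    // sanitize does the one-time copy at the system boundary.
    func sanitize(r rawOrder) (order, error) {
        if r.ID == nil || r.Quantity == nil {
            return order{}, errors.New("missing required fields")
        }
        return order{ID: *r.ID, Quantity: *r.Quantity}, nil
    }

    func main() {
        id, qty := "o-1", 3
        o, err := sanitize(rawOrder{ID: &id, Quantity: &qty})
        if err != nil {
            fmt.Println("rejected at the boundary:", err)
            return
        }
        fmt.Println(o.ID, o.Quantity) // past this point nothing can be nil
    }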


> The fact is when you're working with any data coming from any other system, the data is or will become null, somehow, some way, and your program code which treats this as impossible is just literally wrong in a way that is complete gibberish.

i don't really understand this part. you can't treat it as impossible, because `Optional[T]` is a different type than `T`. anywhere you try to use an `Optional[T]` as a `T` is a type error.

i think you're saying "there are some cases where `T` can also be null that are now unaccounted for," which i don't think is really relevant. that's more a matter of API design; a hypothetical function that receives data from a server should return an `Optional[T]`. one that treats the operation as infallible would throw an exception just like in an implicitly null language, right?


I don’t see how any of the points you made is a great defense of keeping nulls around.

> The fact is when you're working with any data coming from any other system, the data is or will become null, somehow, some way, and your program code which treats this as impossible is just literally wrong in a way that is complete gibberish.

Your program and the other program have to agree on some protocol. That can have optionality built into it if you expect them not to send certain fields. If that protocol is violated then either you made a mistake (the protocol was not accurate) or they did (they sent poorly formatted data); either way, you can handle the error without nulls.

> however this model makes it impossible to treat a value as Optional at an early part of the callstack and Non-optional later in the callstack after it's been checked and verified.

I think what’s being described here could be easily encoded as variants/sum types, or you could have functions that give optional fields default values. Without a specific example it’s hard to discuss what you’re really trying to say.


> Just let it be null everywhere, and if it's null somewhere it shouldn't be, the program throws an error--like it should, because there's an error.

Optionals everywhere makes that explicit, vs. something implicit that you can forget.


>Actually, javascript is the only language that has it right. Not only can anything be null, anything can be undefined (which isn't even remotely similar, and anyone who doesn't understand why doesn't belong in the conversation), AND values you don't know about can exist.

If you mean `unknown`, that's a TypeScript thing.[0]

And personally I love working on a code-base and with coders that differentiate between null and unknown. But there's no denying it's a footgun, and many get no value besides bugs from these two. It's kind of similar to how JS gets boolean comparisons weird, which is worked around by convention.

[0] https://www.typescriptlang.org/docs/handbook/release-notes/t...


> Actually, javascript is the only language that has it right. Not only can anything be null, anything can be undefined

I think they are very similar in that there are now 2 nully things to guard for.

Option types suck in Java as they were tacked on later. Maybe in Haskell, or even nicer, Maybe in Elm, is where the party's at. It is built in at the center of the language. And getting a value (v) from a list by an index should give you a "Maybe v", as you could be out of bounds. That's the strong typing that basically keeps me safe when viciously attacking a code base I have not touched for some time (or that was not written by me to begin with).


I kind of agree, when working on a large team.

Adding optional arguments everywhere is a bit like declaring checked exceptions everywhere. It's true that optionals make the code more explicit, and they force developers to handle every situation, but in the end you still need developers who actually care about writing readable and maintainable code for it to be a net gain.

Something I have come across a fair bit is code where too many values are declared as optional, so the developers try to build an elegant solution by mapping functions over these optional values, returning a default at the end. If there is a value, then the expected code path gets called, else it just silently skips the rest of the code and returns back up to the top layer. This causes a similar issue to what can happen with promises - you lose the context of where the first "error" happened. This could be avoided by eagerly throwing an exception in the first absent case, but that takes us back to where we started with null.

I think the "best practice" way of using optional values is to get buy-in from all the developers on the project that basically nothing should be optional, ever. The goal, then, is to explicitly throw an exception (or return an error) at the very top level and ensure that only actual values enter into the main part of the project's code. But enforcing that is more of a management problem than a technical problem.


> Additionally, programmers don't want to pass huge lists of parameters to every function, but instead bundle things into structs to be easily passed around, however this model makes it impossible to treat a value as Optional at an early part of the callstack and Non-optional later in the callstack after it's been checked and verified.

No, it doesn't; there are a couple of easy, obvious ways to do that, depending on how you get the data components and how far it is useful to pass them as a group without ensuring none are null. It's not even mildly challenging, much less impossible.


I've been experimenting with Rust's Rocket webapp framework, and as much as I like it, I agree it's very annoying that a struct can't permit nulls in one context but not another. I wish there were an easy solution that didn't require near-identical struct definitions and excess copying of memory from one place to another.


In Rust you can solve this via derive or a macro. Typescript's conditional types and the associated utility types are a nice solution to this problem as well. You can do a ton with Partial, Required, Pick, Omit, Exclude etc.


Thank you both for these tips. I'll see what I can figure out! :-)


Seems like a good opportunity for a custom derive.


This is not an issue with Option at all. When you receive data from an external system, you should always parse it before making assumptions. By parsing I don't mean reading property x as a string, but making it a CustomerId, for example, considering your constraints before accepting the value.
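
A minimal Go sketch of that parse-at-the-boundary idea, with a hypothetical CustomerID type whose only constructor validates the raw input:

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // CustomerID can only be built through ParseCustomerID, so any value of this
    // type has already been checked; downstream code needs no extra guards.
    type CustomerID struct{ value string }

    func ParseCustomerID(raw string) (CustomerID, error) {
        raw = strings.TrimSpace(raw)
        if raw == "" {
            return CustomerID{}, errors.New("customer id is empty")
        }
        return CustomerID{value: raw}, nil
    }

    func (c CustomerID) String() string { return c.value }

    func main() {
        id, err := ParseCustomerID("  c-42 ")
        if err != nil {
            fmt.Println("reject at the boundary:", err)
            return
        }
        fmt.Println("validated id:", id)
    }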


Golang needed sum types (for options) and match statements (python got it after all...) way before it needed generics. CHANGE MY VIEW.


Sum and intersection types + pattern matching feels so natural in languages like F#, OCaml, Reason, and Elixir. I know they add a significant amount of complexity to the language, but the more I use them, the more I feel that the tradeoff lands in the goldilocks "just right" zone. I find myself missing them often.


> feels so natural

Yes. They are as important as records/structs a.k.a. product types.

> F#, OCaml, Reason, and Elixir

Haskell, Elm, PureScript, Kotlin, Idris, ...

> I know they add a significant amount of complexity to the language

Really? I don't think it can be so much more than generics :)

Elm has 'em, and the whole language is 5k lines of code.


How exactly are you going to implement a type safe option type without generics? You either have to use interface{}, which is obviously not type safe, or write/generate an option type for every type contained within it. Sum types are a lot less useful if you don't have generics.
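
For illustration, a minimal sketch of such an option type using the square-bracket type parameters from the accepted proposal (the [T any] form, as it later shipped in Go 1.18); without the type parameter this would have to be either one copy per element type or an interface{} box:

    package main

    import "fmt"

    // Option is a minimal sum-type-like wrapper: written once, yet type safe for any T.
    type Option[T any] struct {
        value T
        ok    bool
    }

    func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
    func None[T any]() Option[T]    { return Option[T]{} }

    func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

    func main() {
        n := Some(42)
        if v, ok := n.Get(); ok {
            fmt.Println(v + 1) // v is an int; no interface{} and no type assertion
        }
    }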


It could have been built into the language. Tons of builtin features (slices, maps, channels) are already "generic".


This


Yes. But null-safety is absolutely at the top (as it is soooo hard, maybe even impossible, to add later).

Generics... Well, I just came from an Elm gig (knowing Haskell/C++/Java/Kotlin/Ruby) and I must say I was not too bothered by the lack of generics there.


Wait, doesn't Elm have parametric polymorphism like other ML-family languages? That's just as powerful as most languages' generics, and with a lot fewer brackets.


> Well, I just came from an Elm gig and I must say I was not too bothered by the lack of generics there.

Elm has generics.


Really? Why then are there List.map, Maybe.map, and Array.map, and I have to select one of them? Is that not what generics would fix?


> Is that not what generics would fix?

No. That would be fixed by having something akin to typeclasses, traits or interfaces.

Elm's List, Maybe and Array are all defined using a generic type parameter.


No, it fixes that you don't have to choose between ListOfInts, ListOfStrings and ListOfListOfArrayOfInt.

BTW, one doesn't have to choose between them in Go either, but only because there are under-the-hood generics for the native types, which are only accessible to the language designers.


There is a terminology issue here, with "generic" being overloaded to mean different things. Parametric polymorphism is what Go is adding. You'd like to see something like ad hoc polymorphism in Elm, like type classes in Haskell. But neither of those should be confused with datatype-generic programming. (A small Go sketch of the parametric kind follows the links below.)

https://en.wikipedia.org/wiki/Parametric_polymorphism

https://en.wikipedia.org/wiki/Ad_hoc_polymorphism

https://wiki.haskell.org/Generics
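
As a rough illustration of the parametric kind in Go terms (assuming the [T any] syntax from the accepted proposal): one Map covers every element type, but it is still fixed to slices; covering List/Maybe/Array with a single map is the typeclass-shaped problem.

    package main

    import "fmt"

    // Map is parametrically polymorphic over the element types, but not over the
    // container: it is written once for slices, and a Maybe or channel version
    // would still need its own Map.
    func Map[T, U any](xs []T, f func(T) U) []U {
        out := make([]U, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    func main() {
        fmt.Println(Map([]int{1, 2, 3}, func(n int) string {
            return fmt.Sprintf("#%d", n)
        }))
    }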


I think you want generics for many useful sum types (e.g. options) so you don't have to repeat your sum type definition everywhere it is used. Also, you can have options and matching without sum types (e.g. scala).


BTW, Scala 3 might change that. There is a language feature which can be turned on to have explicit nulls, which will be in the form of union/sum types: https://dotty.epfl.ch/docs/reference/other-new-features/expl...


Python has all of those things, including generics, so whatever order they are needed in, “Python got it” doesn't argue for one over the others.


> Of course I'm joking, but I think implicit nullability is the worst part of Go.

Go doesn't have implicit nullability. You have to declare that something is a pointer for it to be nullable.

There are a few kinds of pointers: "normal" pointers, slices, maps, function pointers, and interfaces. Any of those pointer types are nilable. Regular structs and primitives are not nilable.

    var x someStruct = nil //this will not compile
    var x int = nil //this also won't compile


When I read Go code, it is riddled with null guards. Any of those null guards may be removed, and the code still compiles without additional warnings but now may blow up at runtime.

Sorry, but to me that's a clear symptom of implicit nullability.

Edit: clarification, to me it does not show in the type signature that something may return a null, also I do not have to unpack a possible null value (like with Maybe or Option).


You can't check for nil on something that isn't nilable -- it's a compile error to do so in Go.

Your definition of "implicit nullability" is suspect here.

Go doesn't even implicitly de-reference things, which is what I think you're intending to say that it does. (EDIT: field access does cause a dereference, as was pointed out in a response to this comment. I still think of this as an explicit action when you're specifically operating on a pointer variable, but I'll concede this point here.)

Your position on this is really confusing at the moment. Go handles nullability in an extremely different way from Java.

Edit for your edit:

> clarification, to me it does not show in the type signature that something may return a null

The best way to know something won't be null is to use value types where possible. Don't return nilable values unless you have a good reason. When you return nilable values, only return nil when the error value isn't nil. This is a common pattern in Go code. Why would you return nil if there wasn't an error? If there was an error, why would you care what else was returned, outside of some very exceptional circumstances?
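
A minimal sketch of that convention (the findUser/store names are made up): the pointer result is non-nil exactly when err is nil, so callers branch on err and can then dereference without a separate nil guard.

    package main

    import "fmt"

    type User struct{ Name string }

    var store = map[string]User{"alice": {Name: "Alice"}}

    // findUser returns (nil, non-nil error) or (non-nil *User, nil error), never a
    // nil pointer alongside a nil error.
    func findUser(id string) (*User, error) {
        u, ok := store[id]
        if !ok {
            return nil, fmt.Errorf("user %q not found", id)
        }
        return &u, nil
    }

    func main() {
        u, err := findUser("alice")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println(u.Name) // safe: err was nil, so u is non-nil by convention
    }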

> also I do not have to unpack a possible null value (like with Maybe or Option).

I write Rust and Go professionally... I'm familiar with the pros and cons here, and I wish that Go would adopt Sum Types, but this has nothing to do with "implicit nullability", and in practice... it's really not a big deal. I've written huge amounts of Go code, and I just don't remember encountering nil exceptions in production except once in a blue moon -- and it is similarly common in Rust to hit an "expect" or "unwrap" that you thought was statically impossible to reach. Go has good editors (like GoLand) and linters (like golangci-lint) that make it easy to avoid most of the practical pitfalls.


> I wish that Go would adopt Sum Types

And then make a Maybe sum type? But then it is too late, as the std lib communicates with nils instead of Maybes.

> The best way to know something won't be null is to use value types where possible. Don't return nilable values unless you have a good reason.

I'm not doing this, I'm using an API that does! I cannot choose what some code returns; all I can do is --sigh-- add another null guard.

If it is my discipline to add null guards or have runtime explosions I consider the nulls to be implicit.

That you find as many runtime problems with unwrap/expect in Rust (which have a lot of red tape on them in API docs) as with "missing null guards" in Go is great for you, but to me that does not make it a good design choice in Go.


> If it is my discipline to add null guards or have runtime explosions I consider the nulls to be implicit.

But they're still explicit nulls.

You seem to be saying that only implicit nulls require null guards, but that's just not what that means (as far as I have learned).

You seem to hate nullability as a concept, which is fine, but that's different from hating implicit nullability.

> That you find as many runtime problems with unwrap/expect in Rust (which have a lot of red tape on them in API docs) as with "missing null guards" in Go is great for you, but to me that does not make it a good design choice in Go.

It's also a comment on how NPEs are just not a common hazard in Go compared to my past experiences with Java, especially when combined with good editors and linters.


> But they're still explicit nulls.

Except that there may be other reasons to use pointers than just nullability. And in that case, there's no mechanism to specify whether or not the thing being pointed to is nullable.

I think that's where the implicitness in question comes in.


Explicit to me is when I can see from the return type whether it is or isn't nullable AND I have to deal with the null case explicitly (so nullableUser.lastName() errors out at compile time, as I cannot call lastName() on a possibly-null value).

But I might well be mistaken.


> Go doesn't even implicitly de-reference things

Except when accessing a field via a pointer to struct.

https://tour.golang.org/moretypes/4


I don't like your definition of "implicit nullability". Following that logic, Java integers can't be null either, and C has no implicit nullability! In practice, when we say we don't want implicit nullability, we mean that we want nonnullable everything, including pointers. After all, Tony Hoare called null references his billion dollar mistake.


If I return a struct pointer in C, I can also return null. The return type does not communicate this to me: this is what I call implicit nullability.

It is just a bad contract, as it is often misunderstood, and it allows for runtime errors that could easily have been caught at compile time.


That's not implicit nullability, in my opinion.

I agree that non-nullable references are wonderful, and Go will probably have to grapple with that eventually. I just don't want people to get the wrong idea. Go has value types -- not everything is a reference that is implicitly nullable.

C isn't implicitly nullable, you are correct.


I feel like we're just using different definitions of implicit nullability, and by your definition, no language with value types is implicitly nullable, whereas by my definition, every language without non-nullable references is implicitly nullable. I think your definition is bad and uncommon, but that's not a very fruitful line of discussion.


You're referring to how some languages have non-nullable references, which is fine. That's probably a better term to be using, since it is more clearly defined and less contentious in this discussion.

I agree that Go should consider adopting non-nullable references.


Putting your rhetoric aside, you're mostly right. As someone that maintained an old Java project, a NPE was the most common type of issue that I had to deal with. The code was riddled with them and business logic often broke leaving me to deal with lots of support tickets.

That being said, implicit nullability leads to mostly human errors and comes from our inability to fit all the parts of a complex system in our brains. I'm curious if there are examples where machines write code and use plenty of implicit nullability without that ever causing any NPE issues.

Just a thought.


Plenty of the examples you request exist. Here are two of them:

Well, Haskell compiles to a binary, and Elm compiles to JS. The resulting binary/JS is basically machine-written and does not throw any NPEs.


What do you mean by implicit nullability? Go doesn't have it. A pointer can be nil, which is obvious, but a regular string or int, or even a struct type cannot be nil (unless it's a pointer).


> A pointer can be nil, which is obvious

It's certainly common, but of the things it would be useful to be able to constrain so that nillability is explicit and non-default, pointers are pretty high on the list.


What would be a better design?


Statically requiring a non-nillable pointer (the default case) to be assigned a non-nil value before being used (presumably, ultimately as a result of an operation that fails if it can't allocate, or one that either fails or returns a non-nil pointer when passed a nillable pointer)?

The same thing as is done for any other non-nullable value in static type systems.


A more apt comparison is TypeScript to JavaScript, perhaps. Kotlin and Java interop at the bytecode level, but Golang doesn't have that. Gotlin would have to compile to Golang with the same shimming hairiness as TS downleveling to JS.


Well spotted. Though TypeScript does not fix some of the biggest horrors of JS. Interop is really good, as one would expect.


I'd say it depends on how you tune your tsconfig and linting. You can really crank all the knobs and avoid nearly all of the JS pitfalls.


Agreed. But this is not very beginner friendly (sadly JS is more and more the first lang being taught), and is not an example of great design.


Considering that Go is only really used in a statically compiled context, that's not really necessary.


Will it require a VM to run?

Also, how fast will this language build large projects?


It will be like Go: no VM. And have great interop with Go!

> Also, how fast will this language build large projects?

Slightly slower.

I wonder what the implementation of generics will do to Go's otherwise stellar compile times. A code base heavily using generics can easily double the compile time. Not sure how this will be sold; probably "but then simply do not use it!"


I'm made out of meat; the compiler is always going to be millions of times faster than I am. Are generics actually slower than codegen plus parsing the same method body over and over with different types?


> meat

Flesh to me. (vegan) :)



Thanks for the reference. I chuckled :)


For a brief moment I read Goblin


This is a significant milestone for Go, and I'm extremely happy for the community. I didn't imagine getting here when I first started using the language, and yet here we are.

Special props to Ian Lance Taylor and Robert Griesemer for their continued revisions of drafts, and exemplary discussion with the community in implementing feedback.


As someone who hasn't followed the discussion all this time, is there an up to date example of what the generics syntax will look like?



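The accepted proposal document has fuller examples; as a minimal, hedged illustration of the overall shape (type parameters in square brackets, constraints expressed as ordinary interfaces; the Stringer/Join names here are made up):

    package main

    import "fmt"

    // The constraint is an ordinary interface.
    type Stringer interface {
        String() string
    }

    type label string

    func (l label) String() string { return string(l) }

    // Join accepts any slice whose element type satisfies Stringer; the type
    // argument is usually inferred at the call site.
    func Join[T Stringer](xs []T, sep string) string {
        out := ""
        for i, x := range xs {
            if i > 0 {
                out += sep
            }
            out += x.String()
        }
        return out
    }

    func main() {
        fmt.Println(Join([]label{"a", "b", "c"}, ", "))
    }
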
Thanks!


Nobody will have anything to complain about once go has generics, we'll never hear about Golang on Hacker News again! It'll just be the Rust people complaining. ;)


There’s always error handling.


Error handling, lack of enums, modules, too big executables, lack of optimizing backend, middle class GC implementation.


Error handling - I'm a huge fan of explicit error handling. You know exactly what's going to happen.

Lack of enums - not that big of a deal. Never had any issues with defining an enum-like type, which is a common idiom in Go.

Modules - it has been fixed. Are there any issues left?

Too big executables - I mean, OK, so what? Why is it a concern?

Lack of optimizing backend - could you elaborate on that?

Middle class GC implementation - are you referring to lack of manual tuning?




This will probably be downvoted, but I personally never felt a huge need for generics. C doesn't have them and is arguably the most successful language in history. Yes, they are convenient, but they also add a lot of complexity to the language and toolchain. I suspect that this proposal being accepted is largely due to the huge growth of the Go community - I bet the original team (in particular, I'm thinking of Rob Pike) are at best ambivalent about this proposal and were outvoted.

Personally, the proposal I was most excited about was to make ints be arbitrary precision by default. As someone who does a lot of math, this would have made Go much easier for me to use. Sadly, this proposal was scrapped a while back.


> C doesn't have them and is arguably the most successful language in history.

I'm not sure "C didn't have it" is a good litmus test for determining the value of features... C is called a portable assembler for a reason.

> Yes, they are convenient, but they also add a lot of complexity to the language and toolchain.

Conversely, proponents of generics would argue that not having them creates complexity for software developers who have to come up with alternative design patterns where generics would be a better fit.

> Personally, the proposal I was most excited about was to make ints be arbitrary precision by default.

Why would you want that? C doesn't have it... ;)


Haha, touche. Your response is fair. My point was only that some people act like any successful language must have generics, which is demonstrably false IMO. I am not opposed to innovation and ergonomics (hence, my support for arbitrary precision ints), but we should carefully weigh the pros and cons of every new proposal, especially if it increases the complexity of the language and toolchain. In my opinion, the benefits of generics are outweighed by the cost of their complexity.


Well, we know now, contrary to what was said about generics and Go when Go came out: Go apparently cannot do without generics.

What I always found hard to swallow is that Go's built-in functions sometimes did have generics. It's just that you, as a library/application writer, were not allowed to create APIs with generics yourself.


Yep, seemed weird to me how (a form of) generics existed, but was exclusive to the stdlib/language.

If generics were unnecessary, why did the stdlib/language need them? Couldn't they just have lived with, e.g., make_map, make_slice, make_chan, etc?


Exactly. It was IMHO pretty authoritarian to say "you don't need generics" and then use them yourself in the standard lib.

But then Go is not a public place: it's the private domain of Google.

I decided a long time ago that I don't want to learn a private tool without being paid.


And a separate copy of `append` for every possible type :)


First: I ninja-edited my comment after the fact with a thought that occurred to me after I posted, so apologies there.

Anyway, yeah, no argument there at all.

I'll be honest, as an outsider that's somewhat familiar with Pike's obsession with simplicity in Go, I'm actually a bit surprised this is getting in. It does seem like the kind of high-complexity feature that was deliberately excluded from the language as part of its overall design ethos.

But it really comes down to a weighing of pros and cons. Generics have the potential (though definitely not the guarantee!) to trade off developer complexity for compiler and toolchain complexity, and the preference on that choice is a personal one.


Well, then you also know he just hadn't seen any good implementation of generics yet. Let's hope this one turns out good!


> I'm not sure "C didn't have it" is a good litmus test for determining the value of features

It can be a good test.

One reason why C is so ubiquitous is the simple binary interface. This makes it easy to reuse code since the libraries can be imported by every other language out there. C code is relatively simple and the compiled objects follow simple binary interfaces.

Adding features to languages almost always increases the complexity of these binary interfaces. Eventually they become so complicated that nothing will ever interoperate with software written in these languages. The increased language complexity reduces the reusability of software produced in that language.

C++ had this problem and Rust is following in its footsteps. Rust libraries are really only reused within the Rust ecosystem. Those that are meant to be universally reusable will no doubt offer a simple ABI that lacks all the benefits of the Rust language.


UNIX ABI.


C may not have generic types, but much of the standard library does use generics, just through the unsafe mechanism of `void*`. Likewise, Go already includes some generic code (the array type); it's just treated specially.


The equivalent to a void pointer in Go is interface{}. But very little of the C standard library uses void pointers--basically only those interfaces dealing with untyped blocks of memory, such as malloc and free. IME, you see interface{} far more often in Go code than you see void pointers in C code. And C actually has bona fide generics capabilities with _Generic, it's just very simple and manual.
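
A small sketch of that contrast in Go terms (hypothetical firstBoxed/first helpers; the generic one assumes the [T any] syntax from the proposal): the interface{} version loses the static type and needs a runtime-checked assertion to get it back, while the generic one keeps it.

    package main

    import "fmt"

    // firstBoxed is the void*-ish version: the element type is erased to interface{}.
    func firstBoxed(xs []interface{}) interface{} {
        if len(xs) == 0 {
            return nil
        }
        return xs[0]
    }

    // first keeps the element type; no assertion, no panic path.
    func first[T any](xs []T) (T, bool) {
        var zero T
        if len(xs) == 0 {
            return zero, false
        }
        return xs[0], true
    }

    func main() {
        v := firstBoxed([]interface{}{1, 2, 3})
        n := v.(int) // would panic if the slice held something else
        fmt.Println(n)

        m, ok := first([]int{1, 2, 3}) // m is statically an int
        fmt.Println(m, ok)
    }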


Go's empty interfaces reminded me too much of the bad old days when everyone passed around Objects to Java methods. I hated it.


void * is very useful as an opaque pointer type. For example, it's used all over the place in collection implementations and user-defined arguments (e.g., callbacks, pthread_create).

It might not be used much in the stdlib (which is relatively small!), but libraries and applications rely on void * heavily in my experience.


For me the desire to use generics shows up once I've invested some time in making fancy types. When I work in C that point is never reached since problems are solved "the C way".

Personally I'm a huge fan of generics but I can understand how the keepers of Go might be reluctant to go down the path of C++, Java and C#.


I think that C# has a successful implementation because it isn't just syntax sugar. The runtime knows about it. Generics + structs allow many optimizations.


So true and portable across languages.


"I meant there are no plans for generics. That's not the same as saying we plan not to do generics. It just means we don't have a plan.

The FAQ is still accurate.

-rob"

"The time has come to change Go, given what we have learned over the past decade of using it in production. -rob"

The reason this proposal was accepted is that Rob and others liked this plan and said over a decade of production use taught them it is an important thing to have. In fact, Rob asked Wadler and company to help get things right for Go, so I would say you'd bet wrong.


> C doesn't have them and is arguably the most successful language in history.

And many C programs just emulate them using crazy macros that are harder to write and less type-safe than proper generics.


I was going to say -- the way you write a lot of "generic" code in C is with macros. Coming to Golang from C and having neither C-style macros nor proper generics was quite painful.


> I'm thinking of Rob Pike

I do not remember him being against generics. He is the one who contacted his former colleague Phil Wadler to help with the theoretical validation of the type parameters proposal. In his talk about the Go 2 draft specifications (back then the proposal was still based on contracts) he was positive that a good design can be found. (*) https://www.youtube.com/watch?v=RIvL2ONhFBI


C11 added _Generic as a way to write generic functions, for data structures people use the preprocessor to code-gen instantiations of generic code.

Rob Pike has nothing against generics and never expressed criticism of them. His concern was always fairly practical; among the numerous forms of generics that exist among different languages, and the significant variation and rapid evolution of them, how can such a feature be safely added to the language?


>I bet the original team (in particular, I'm thinking of Rob Pike)

Apparently he's nervous but can think of useful applications of generics: https://old.reddit.com/r/golang/comments/jditu9/what_do_gene...

Robert Griesemer was a coauthor on the Featherweight Go paper so I'm assuming he supports it. I would assume Ken Thompson doesn't care for generics but I think he's retired so he wouldn't vote on it anyway.


PHP was also hugely successful; success is not the criterion by which this kind of feature should be decided.

The bottom line is that generics allow the production of safer, more robust, more evolvable, better-documented, more performant code.

Along with static typing, generics are simply a feature that no language created recently should be without.


A successful language requires less effort to solve your problem. That's why we create them, and it's strange how languages keep becoming popular without nailing this.


and less readable, less auditable, less understandable, etc.


These are all subjective aspects which often confuse unfamiliarity with obfuscation.

I can read generic code in multiple languages even when I'm not fluent in these languages.

Once you get used to generic code, it actually becomes easier to read and understand than code where everything is type cast or Object typed all over the place.


Readability is far from subjective: if 50% of your devs can't read parts of the codebase then it's not readable.

Simple generic code is fine, but generics can be abused to over engineer things.


Agree with you on the generics: they make code harder to read, and the biggest selling point of Golang is that it's easy to audit/read/understand.

IMO Golang needed sum types and match statements, but it didn't need generics.


Out of curiosity, what changed between now, and when proposals for generics came up in years past?


This isn’t a new proposal. Go’s maintainers have always been clear that they weren’t against generics per se but were against rushing into implementing something without giving sufficient time to consider the options.

Personally I don’t think their time spent considering has resulted in anything better than if they had rushed into a solution (I’m not a fan of this proposal). But that’s just my personal opinion.


I'm not sure what you're comparing to, but there were previous proposals that were pretty different (and worse in my opinion), and there were changes along the way in the drafts that eventually turned into the accepted proposal.


Yea basically they're implementing Java-lite generics. I'm not sure what they have been waiting for exactly...


Java generics do type erasure, this proposal does not. Java generics do not work on primitive types or with operators, this proposal does. Java generics require unbounded parser look-ahead by using <>, this proposal avoids it by using []. These are just from the top of my head, I'm pretty sure there are more differences.
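
For example, a constraint can be a set of types, which is what lets generic code use operators directly; a sketch below uses the union constraint syntax this design evolved into for Go 1.18 (the Number/Sum names are made up):

    package main

    import "fmt"

    // Number is an interface used purely as a constraint; the union of underlying
    // types is what allows + to be used on T inside Sum.
    type Number interface {
        ~int | ~int64 | ~float64
    }

    func Sum[T Number](xs []T) T {
        var total T
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        fmt.Println(Sum([]int{1, 2, 3}))      // 6
        fmt.Println(Sum([]float64{1.5, 2.5})) // 4
    }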


> Go was released on November 10, 2009. Less than 24 hours later we saw the first comment about generics.

- https://blog.golang.org/why-generics

But it's just taken that long to come up with a good design. Most of the work has been pushed forward by core team member Ian Lance Taylor who put forward the case in https://github.com/golang/proposal/blob/master/design/15292-...

This abstract proposal has been updated to link to a summary of concrete proposals at the footer, "presented for historic reference. All are flawed in various ways":

Type functions (June 2010) - https://github.com/golang/proposal/blob/master/design/15292/...

Generalized types (March 2011) - https://github.com/golang/proposal/blob/master/design/15292/...

Generalized types (October 2013) - https://github.com/golang/proposal/blob/master/design/15292/...

Type parameters (December 2013) - https://github.com/golang/proposal/blob/master/design/15292/...

But it wasn't updated with implementation proposals after 2013, most notably

Contracts (2019) - https://github.com/golang/proposal/blob/master/design/go2dra...


The main difference between the accepted proposal and the original proposal is that the original proposal introduced contracts and the accepted proposal folds all that functionality into interfaces.


I wonder if over time, Golang will pick up more type features like Java and other languages have.

The general consensus seems to be that powerful type systems are very effective.

Personally, the low footprint runtime and concurrency primitives are enough for me and I wouldn't mind the language becoming "less simple" if it helps the ecosystem.

Once generics are implemented, I can imagine people requesting for the next "missing" thing.


> I wonder if over time, Golang will pick up more type features like Java and other languages have.

I hope it doesn't pick them up like Java did.

When Java was considering generics, there were two major proposals out there. Sun decided on easily the worst one: type erasure. Now we're stuck with it.

When Java was considering closures, there were two major proposals out there that I recall [one being to get rid of Java's broken local variable closure semantics]. Sun (or Oracle? I forget) again picked the worse of the two proposals. Now we're stuck with a real monstrosity.

Java has an amazing history of picking the wrong way to do things and permanently saddling developers with it.


Type erasure was the best of these options, and it's one of the main reasons why Java became even more successful than it already was, and also the main reason why there are so many languages created on top of the JVM (as opposed to .NET, which supports reified generics, which enormously complicates writing languages on top of it, especially for interop reasons).

More details: https://www.beust.com/weblog/erasure-vs-reification/


Since type erasure was mentioned, it's worth noting that Philip Wadler (of Haskell fame) was involved both in the design of Java generics [0] and of Go generics [1].

(Perhaps I should clarify that I don't have a strong opinion on type erasure.)

[0] https://homepages.inf.ed.ac.uk/wadler/gj/Documents/gj-oopsla... [1] https://arxiv.org/abs/2005.11710


> When Java was considering generics, there were two major proposals out there. Sun decided on easily the worst one: type erasure. Now we're stuck with it.

Though using type erasure may have made the JVM a better target for other languages.


Any objective reason for thinking either feature is bad?


I think I'm ok with this as well.

The developers behind Go have a really strong culture of taking a ton of time to implement any major language changes; very reminiscent of Java and C++. Talk around Generics began, well, when the language was first created, but even more seriously like five years ago, and it'll probably be another year before it hits production.

I love this. Language changes need to be thought through carefully, with all angles considered, and by going slow it gives major developers time to give feedback, prepare, and most critically not always feel like the code they write will go out of date in three months. By comparison, writing anything in, say, Rust (and JavaScript ~four years ago; it's better nowadays) feels exhausting, because it's a constant battle with changing culture and evolving best practices.

My favorite feature of Go is its characteristic of not carbon-dating codebases. Go written a decade ago looks almost the same as Go written today; Contexts would be the single major pseudo-language-level feature added in that interim which may give away newer code. Adding new features is still important, balance in all things etc, and code written after generics will give another epoch of carbon dating. Go strikes this balance in a way that should be a model for every other language.


> Once generics are implemented, I can imagine people requesting for the next "missing" thing.

Perhaps. But there is (currently) no other feature that people have been whining about nearly as much as they have been whining about generics. So I think it's going to be a while before another need is felt to the same degree.


Golang is Google's backup to Java in case Oracle stopped letting them have Java, with some additional design goals for large companies, like fast build times, better memory usage, and something junior engineers can pick up without much trouble.

They both have garbage collection, they both perform about 3x slower than static C, and they're both mostly used for network services, which is the same as Java at Google.


Go is AOT compiled, thus many optimizations are not possible there.

Also, 3x slower than C is a baseless claim for both; at least for Java, for comparable, big code sizes, Java will easily win, because mallocs are expensive, and arenas, pools, and whatnot are a poor man's GC with worse performance, while JIT compilation can do some really aggressive optimizations, as well as heap compression, etc.


I really, really hope that this does not end up with people abusing generics in Golang code and making it harder to read. The biggest selling point of Golang, in my opinion, is that it is extremely easy to audit and read and understand at the moment.


Anything that can be abused in a language will be abused.


When you get used to seeing it, the types mostly disappear from view until you look for them. I got to that level with Java, and Kotlin for the most part--never with Scala; it always looked like line noise.


I'm kind of sad that they didn't allow you to use Inuktitut characters as a joke about this classic comment.

https://www.reddit.com/r/rust/comments/5penft/parallelizing_...


Finally! At least one less pain point when dealing with the Docker/K8s ecosystem.


What does this mean for a timeline on when we can start using them?


In a stable release maybe 1.18 (Feb 2022) or 1.19 (Aug 2022). In a beta release maybe 1.18 beta1 (Dec 2021).


Very much excited that this is finally moving forward. Such sweet improvements this will allow for SciPipe and FlowBase when this nears completion [1,2].

It will make it so much easier to enable typed port objects, which can still re-use all the handy functionality for connecting inports/outports, traversing the dataflow graph, etc etc.

[1] https://scipipe.org

[2] https://flowbase.org


Sad to see that we won't be getting type parameters on methods. I hope they fix the issues with it and can get that working in a future proposal.


"We don't need generics"

"We don't need exceptions"

"We don't need ORMs"


2 out of 3 right ain't bad. We really don't need ORMs


Yep, I can see at work how fantastic those hand-written SQL scripts wrapped in bash to run migrations are, and those magnificent joins and the manual mapping of dates.

That's great, until you realize you have an actual product to build.


Well, no ORM != bash.

And no ORM != manual mapping of dates. All kinds of drivers have custom adaptors for data types.

(Also a query builder is not an ORM).

>That's great, until you realize you have an actual product to build.

That's exactly the problem with ORMs. You get worse SQL generated behind the scenes, with worse performance, and less control.

It's just hidden under the carpet.

Plus, if you build your product with heavy domain objects in 2000s OO-style you're doing it wrong.

And if you don't, and use, e.g., data classes and record structures, then you don't need an ORM since there's no "O" to map to.


All of this is an immense wheel reinvention, and the SQL I'm seeing is actually far worse than what ActiveRecord or Django's ORM would generate.

Also, talking about such low-level performance issues makes me think we're already talking about very different problems. Maybe you're working for Google or Amazon or on a pretty performance-heavy application. Then OK, I agree with you.

But my discomfort is with most startups, and most average companies, choosing Go and doing all of this reinvention when what they are doing is basic CRUD for a web app that has fewer than 100 requests a day.

For the use cases I've seen of Go so far, the bottleneck was caused by using it, and it's ecosystem, because the slowest part of the system is the development time and the need to rebuild from scratch a lot of things you get for free otherwise.


> And no ORM != manual mapping of dates. All kinds of drivers have custom adaptors for data types.

Yes, because that's exactly what you want or need: your database drivers implementing custom adapters for data types.

What do you do when your app supports multiple database types? What do you do when you have to switch driver for whatever reason?

ORMs/user-friendly query builders are a layer on top of the driver; the driver shouldn't contain them...


However you do need a different language.


How are generics implemented under the hood? Reflection?


I really wish they went with angle brackets like everyone else does. I get the argument about not wanting to break existing parsers but this is a significant enough language change to warrant that.


It actually causes ambiguous syntax that isn't easy to solve. See: https://go.googlesource.com/proposal/+/refs/heads/master/des...

    a, b = w < x, y > (z)
Is this code doing 2 boolean compares, or is it calling a function with "w" with types x and y?


Could they not make the space (or no space) part of the syntax after the <? gofmt means that whitespace could be made significant... Which is the lesser evil: syntactic whitespace or inconsistent semantic meaning (compared to most other languages)?

I detest how syntactically overloaded our few ASCII symbols are, and using square brackets for templates seems wrong.

Edit: Even better, use gofmt to convert each ASCII symbol into several unique Unicode symbols depending on usage, to show the semantic meaning. So [] as array is represented differently from [] as the template operator.

Edit 2: I suspect this one decision will prevent some developers from investigating Golang and thus affect uptake - choice of syntax matters.


The << / >> characters (sorry, on phone so can’t post the actual Unicode characters) were proposed and, like yourself, the Go team decided they might put people off. But to be honest I actually would have been OK with them.


I know it’s not a popular opinion, but one thing I love about Perl and wished more languages adopted was the way how variables and functions could be prefixed by a special character (‘$’ for scalars, ‘%’ for hashes, ‘@‘ for arrays and ‘&’ for functions, though the latter was optional). Perl had a lot of properties that made it look a lot like executable line noise but those prefixes did help with readability in a way that a lot of more readable languages lack.

In the case of Go and the example you’ve given, if ‘w’ were a function then the code would read like this:

   a, b = w() < x, y > (z)


In the example, we can't tell if w is a generic function or not (by looking at this line of code). Remove the spaces, as would be more conventional:

  w<x,y>(z)
Is this two expressions on one line or one expression with two type parameters? Possible groupings:

  (w<x,y>)(z) // w is a generic function
  (w<x),(y>(z)) // two comparisons


The parens look like "helping ambiguous syntax" rather than "should read like this" here - if z is an argument to w, there was already a set of parens signifying the function call.


Scala uses square brackets. Never liked the look of angle brackets


Nim, CLU, Eiffel and Scala use square brackets.

D, Modula-3 and Ada use parenthesis.

ML based languages use quoted letters.

There is no everyone else.


You've taken my comment too literally. The point I was making here wasn't "angle brackets are used without exception" (clearly that's never going to be true given the rich tapestry of programming languages out there) but instead "I don't like the aesthetic of square brackets". Really it was a statement expressing a personal opinion rather than an empirical fact. Is it a little misleading? Perhaps, but that's natural languages for you (I would have edited my own post to make my point clearer but I was outside the edit window by the time I had realised how ambiguous said point was).

I have gone into more detail about why I don't like the square brackets notation for generics to one of the other replies so I wont reiterate myself here and simply suggest you read the replies before commenting.


So you know languages which use angle brackets and now wish every other language would follow suit. This seems more stuck-up than the Go team's alleged opinion on not implementing generics.


Woooah, steady on there with the conclusion jumping.

The reason I prefer angle brackets is just because I think it’s a little more readable. For me square brackets and parentheses look too similar in long function declarations. This might be a symptom of my dyslexia but the fact remains it’s a real readability issue for me.

I actually wouldn’t have minded if they used another visually distinct character either. Like the << / >> characters that were also proposed (even though non-ascii characters have usability drawbacks when typing code).


Is generics akin to overloading functions?


No, overloading functions is called "ad hoc polymorphism", whereas generics are called "parametric polymorphism".


Nice to know.

So what's the timeline for proper exceptions now?



