Maybe this is unfair, but it’d be great if the Go devs could just say “generics will work similarly to [C# / Java / Swift / D / whatever], except that we’ll address [problems] with [adjustments]”.
Rather than going through this whole rigmarole of resisting adding generics too early because all existing implementations are bad, then slowly reinventing the wheel from scratch, then finally ending up with something pretty similar to one of those other languages anyway.
It’s OK not to make something completely new and different. It’d be useful to explicitly say which language(s) you’re borrowing from because you can then more clearly call out the differences and explain the reasons for them.
I'm also not sure why in OOP-land, generics are this crazy experimental weird feature, when in functional languages, people figured out how to implement parametric polymorphism (the original term for generics) in quite reasonable ways. I get that subtyping adds some complexity, but overall I don't understand why such a basic way to build abstractions is so controversial in (some) OOP languages. If anyone has some context on why this is more difficult to have in Java-like languages, I'd be curious to hear it.
For instance, if A < B (B extends A),
What is the relationship between Array[A] and Array[B]?
If you are just reading the array, you would want Array[A] < Array[B].
If you are writing to the array, you would want Array[B] < Array[A]. If you are doing both, you want Array[A] to have no relation to Array[B].
This problem doesn't come up in ML style languages because they do not make use of inheritance.
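To make the three cases concrete, here is a sketch in Scala (which lets a class declare its own variance); Animal and Dog are made-up stand-ins for A and B, and ReadOnly/WriteOnly/ReadWrite are just illustrative names:

class Animal                 // plays the role of A
class Dog extends Animal     // plays the role of B

trait ReadOnly[+T]  { def get(i: Int): T }            // only produces T: covariant
trait WriteOnly[-T] { def set(i: Int, x: T): Unit }   // only consumes T: contravariant
class ReadWrite[T] {                                  // does both: invariant, like Array[T]
  def get(i: Int): T = ???
  def set(i: Int, x: T): Unit = ???
}

object VarianceSketch {
  implicitly[ReadOnly[Dog] <:< ReadOnly[Animal]]       // reading: Array[A] < Array[B]
  implicitly[WriteOnly[Animal] <:< WriteOnly[Dog]]     // writing: Array[B] < Array[A]
  // implicitly[ReadWrite[Dog] <:< ReadWrite[Animal]]  // does not compile: no relation either way
}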
I think it was Gilad Bracha who said, when the decision was made to include only covariance in Dart, that variance flies in the face of programmers' intuition.
He's right. It's easy to understand one level of variance: you can replace a return type by a subtype and a parameter type by a supertype. (I wouldn't be surprised if many programmers don't understand this.)
Two levels already require some deep thinking (assuming definition-site variance: with `List<Object>` or `List<String>` as a return type / parameter type, what is allowed to replace it?)
More than that? (`List<List<String>>`) Hahaha, good luck.
And by the way, Java has notations to specify the variance of a type, but only at the use site, which is different from doing it at the definition site (each enables expressing things the other can't... but you can actually have both, as I think is the case in Kotlin, though there are some limitations).
I'm not really sure what you mean. Variance just changes what is and is not a subtype/supertype. If List is covariant then List<T> is a subtype of List<U> if T is a subtype of U. So then, by induction, List<List<T>> is a subtype of List<List<U>> if T is a subtype of U. I'm not sure what's hilariously hard to follow here...
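For instance, Scala's immutable List is declared covariant, and the nesting really does compose mechanically; all of this compiles:

object NestedCovariance {
  val xs:  List[String]       = List("a")
  val ys:  List[Any]          = xs       // String <: Any, so List[String] <: List[Any]
  val xss: List[List[String]] = List(xs)
  val yss: List[List[Any]]    = xss      // the same rule, applied one level up
}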
Good point, it's a bad example because it only combines covariance, which is intuitive.
Nevertheless, I have a point because:
1. Things get harder with contravariance.
2. Things get harder when you mix covariance and contravariance.
The prototypical covariant class is Producer<T> (with a method produce() returning T), while for contravariance it's Consumer<T> (with a method consume taking a T as a parameter).
Assume each class has a superclass (Consumer0<T> and Producer0<T>) and a subclass (Consumer2<T> and Producer2<T>). Assume V extends U extends T.
Can you list the subclasses and superclasses of Consumer<Producer<U>>?
Personally, I have to think carefully about this for a minute or so, and I've been there before a couple times.
This is not an especially complex scenario either. I've seen things get worse in practice.
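For what it's worth, the pure variance part of that puzzle can be spelled out in Scala (leaving aside the extra Producer0/Producer2/Consumer0/Consumer2 layers); this is only a sketch of the subtyping relations, and the compiler accepts all of it:

class T
class U extends T
class V extends U            // V extends U extends T

trait Producer[+A] { def produce(): A }          // covariant
trait Consumer[-A] { def consume(a: A): Unit }   // contravariant

object NestedVariance {
  implicitly[Producer[V] <:< Producer[U]]                       // covariance follows the element type
  implicitly[Consumer[Producer[T]] <:< Consumer[Producer[U]]]   // so Consumer[Producer[T]] is a subtype...
  implicitly[Consumer[Producer[U]] <:< Consumer[Producer[V]]]   // ...and Consumer[Producer[V]] a supertype
}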
Ah, I can figure it out pretty easily for that example too but probably only because I am pretty familiar with variance and you chose really generous names. I personally find the reasoning about variance becomes a lot easier if you stop thinking about the definitions and start thinking about what you should be able to do. A producer of a subtype is technically also producing the supertype. A consumer of a supertype can totally consume the subtype.
99% of the time Scala programmers don't have to worry about variance. I've been using Scala for over 5 years and it's never been more than a cursory concern, the defaults usually work fine. You just have the flexibility to work with it how you want if you need to. To compare it to neurosurgery is simply disingenuous
They've been in Scala for at least a decade (iirc), with few alterations. How long does it take for "experimental weird feature" to become "reasonable solution to a problem that should be copied"? Another five years?
That seems like a problem with inheritance + mutability, not inheritance on its own. After all, if B is a subtype of A, then we can make List[B] a subtype of List[A] as long as lists are immutable. Appending an A to List[B] returns List[A], what's the problem? :-) In fact some ML style languages (like OCaml) do have inheritance and it works fine with generics. Some generic types (like lists) will be covariant, others (like comparators) will be contravariant, combinations of the two will be invariant, and it all can be automatically inferred.
Which of course doesn't change the fact that imperative languages trying to combine generics + inheritance + mutability are in for a world of hurt.
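Scala's immutable List does behave exactly this way: prepending a more general element simply widens the result type (Animal/Dog/Cat are made-up classes):

class Animal
class Dog extends Animal
class Cat extends Animal

object WidenOnPrepend {
  val dogs: List[Dog] = List(new Dog)
  // Prepending a Cat to a List[Dog] is fine; the result is just a List[Animal].
  val animals: List[Animal] = new Cat :: dogs
}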
The same problem happens with immutable types. Does a function of type Int -> A extend a function type Int -> B? How about A -> Int extending B -> Int? The answers to these are opposite.
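Both directions, as compiling Scala (B extends A, as in the thread):

class A
class B extends A

object FunctionVariance {
  // Return types are covariant: a function typed Int => B can stand in for Int => A...
  val f: Int => A = (_: Int) => new B
  // ...while parameter types are contravariant: A => Int can stand in for B => Int.
  val g: B => Int = (_: A) => 0
}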
def writeFirst(xs: Array[B], x: B) = xs[0] = x
var as: Array[A] = ???
var b:B = ???
var a:A = ???
a=b; //This is fine, since B extends A.
as[0]=a //Obviously fine, since a:A
as[0]=b //This better be fine, otherwise the above 2 lines just punched a hole in our type system.
writeFirst(as,b); //If I am allowed to do the above, I should be allowed to do this.
For completeness, the opposite example:
def getFirst(xs: Array[B]):B = xs[0];
var as:Array[A]
var b:B = as[0] //This shouldn't work. Not all A's are B's
var b:B = getFirst(as) //Similarly, this shouldn't work.
Thanks for the extra detail, but in your `writeFirst` example I don't see a case of `Array[B]` < `Array[A]` (which in your notation means that `Array[A]` inherits from `Array[B]`).
Using the word "inherits" might be misleading (as might "extends", which I used). We don't particularly care whether any actual behaviour/implementation gets re-used.
The core meaning of A < B is the is-a relationship. That is to say, a value of type B is also a value of type A.
I assume you agree that my first example ought to type-check (although, as the second example shows, there is an argument to be made that it shouldn't). The question is how does it typecheck?
In the last line, we call writeFirst(as,b). Here, the first argument, as, has type Array[A].
However, writeFirst is declared as taking a first parameter of type Array[B]. The fact that this works means that Array[A] is-a Array[B], which is the defining feature of Array[B] < Array[A].
If this were not the case, then we would have needed to define writeFirst with a type along the lines of:
def writeFirst[X < B](xs: Array[X], x: B) = xs[0] = x
Where we explicitly declare that the element type of the array is a supertype of B (X < B in the convention above, i.e. a lower bound, `X >: B` in Scala notation). In this case Array[A] could be used not because of the language's decision on variance, but because the program explicitly typechecks with X=A.
Note that this actually changes the return type. In my original example writeFirst(as,b) would have a natural return type of Array[B]. However, in this new example, the natural return type would be Array[A].
This second example is closer to how ML style languages work.
EDIT: It occurs to me that I was thinking a bit too functionally in this. The natural return type of writeFirst is void in (most?) OOP languages because arrays are mutable. What I wrote assumed that the natural meaning of writeFirst was to construct a new array with the first element replaced.
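For the record, the bounded variant does compile as ordinary Scala if the bound is written as a lower bound (the element type X must be a supertype of B), with X inferred as A at the call site; A and B here are made-up placeholder classes:

object BoundedWriteFirst {
  class A
  class B extends A

  def writeFirst[X >: B](xs: Array[X], x: B): Unit = xs(0) = x

  val as: Array[A] = Array(new A)
  writeFirst(as, new B)   // X = A is inferred; no variance on Array is needed
}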
One other factor is that functional languages typically are theory first, implementation second, while non-functional languages tend to more often have the language features follow whatever the implementation admits. I know for Go they fretted a lot about how you'd efficiently compile generics, which is not something that comes up when you're designing System F or whatever.
But I believe that subtyping has a massive impact on generics, particularly in OOP languages where people expect to use lots of subtypes. There are all of these new questions around not only bounded polymorphism but also variance and mutability and how inference works.
Even TypeScript, which is following a lot of the design decisions already established by C# with the same person behind both, is still kinda just meandering around the design space and continuing to make changes to the semantics in new versions.
If go ever showed people the awesomeness of StandardML or Ocaml/ReasonML (and their amazing type systems), I think the language would lose a lot of users.
This is confusingly phrased. Do you mean "If Go people were ever shown ... ML"? I would totally drop Go for an ML language if any of them had a sane syntax, usable build tooling (including native, static compilation by default), and a (single) decent standard library.
A super awesome type system is worthless without the basic requirements for scalable software development.
If StandardML had even a fraction of the money behind go, there would simply be no contest.
Ocaml is billed as the pragmatic ML, but the syntax really sucks and nominally typed structs aren't nearly as good. They also have 3 competing standard libraries.
Haskell is too ivory tower. Most devs simply can't be bothered.
StandardML is that awesome middle. The language choices are pragmatic compared to haskell (mutable refs, side effects, and not lazy). The syntax is super simple and consistent (unlike ocaml). The concurrent ML extensions offer good multi-threading (still waiting on ocaml). The mlton compiler is very fast. There's only one standard library and it's decent. The big thing holding the language back is third-party libraries. If Google threw their millions at SML instead of go, SML really would be better in every way.
You know what he meant, if you want functionality comparable to the Go stdlib you are going to have to choose between Core or Batteries, then for async do you choose Async or Lwt? And as said the build system (when you're used to Go) is poor. I like a lot of what OCaml offers, but there's a lot of friction.
> if you want functionality comparable to the Go stdlib you are going to have to choose between Core or Batteries
He is talking about SML here.
Care to elaborate, what does Go have to do with Core or Batteries? Both are mostly container libraries, and you don't need them since most of the useful stuff is in the stdlib.
Check revdeps in OCaml: nobody uses Batteries, and nearly nobody save Jane Street uses Core.
XML parsers, Lwt, servers are in separate packages, which is the only and right thing to do.
>then for async do you choose Async or Lwt
Lwt, it's a non-question. Async is for JaneStreet only.
> And as said the build system (when you're used to Go) is poor.
Dune [1] is far superior to Go's build system. Extremely fast, composable, supporting packaging and os-dependent configurations, extremely easy to config. No abomination like "import github.com/package" or "//go:" in your code.
Go's build system doesn't have even a fraction of the nice features dune has. For example I could `dune utop ./path/to/my/libs` to build my libs and run a nice repl to test them.
Go doesn't even let you configure your warnings precisely.
SML gets this right in my opinion. If I create a record `{foo = "abc", bar = 123}`, I can pass that record on to ANY function that expects a record with {foo: string, bar: int} fields, because matching is done on the structure rather than on a named record constructor.
Another nice property of the SML approach is that you don't need named arguments. Instead, pass a tuple with names (aka a record). One set of syntax rules covers both cases. I also rather like the ability to access record fields with the hash syntax (eg, `#foo myrecord`) when I don't want to destructure.
You are confusing records with structures, it seems. Structures exist in the module language. Records are nominal in SML as well. Access functions for tuples and records are a dirty ugly hack built in for convenience; they have nothing to do with structural typing, they are inferred in place.
OCaml has structural typed records, they are called objects.
let obj = object method pi = 3.14 method name = "Pi" end
val obj : < pi : float ; name : string >
is a structurally typed record.
> I can pass that record on to ANY function
No, you can't.
#foo { foo = 42 }
works, but
fun f x = #foo x
doesn't. It's an ugly hack, typing is still nominal. A hack, just like SML's arithmetic op overload.
Compare it to OCaml, which has proper structural typing for objects and row polymorphism
let f x = x#foo
f : < foo : 'a; .. > -> 'a
Edit: sorry, indeed typing is structural since you can write `{x:typ} -> typ`.
Anyway, OCaml has structurally typed records, row polymorphism, and subtyping support for them.
> Another nice property of the SML approach is that you don't need named arguments. Instead, pass a tuple with names (aka a record).
What's your point? You can use records as arguments in OCaml as well.
> I would totally drop Go for an ML language if any of them had a sane syntax, usable build tooling (including native, static compilation by default), and a (single) decent standard library.
F# seems to meet all of that except that native static compilation isn't the default (but is available).
I have found Rust to be a nice middle-ground for this. Sure, the borrow checker takes some getting used to, but it features an ML-style type system (though I still miss some extensions to Haskell's type system available in GHC), rock-solid tooling, and a comprehensive well-documented standard library. The high quality of third-party crates also surprised me. Whether the C-like syntax is sane is debatable though.
Yeah, Rust is the best ML for the things I care about, but after 5 years of on-and-off use, I still haven't adapted to the borrow-checker and a GC is just fine for the applications I write. And learning curve is important too--I need to be able to onboard new developers quickly. Go simply offers the better tradeoffs today for my apps. If someone built a "Rust-lite"--Rust with Go's runtime or Go with Rust's type system (less its lifetimes and borrowing semantics--insofar as those are considered a part of its type system), that would be my primary app dev language. But it's looking like Go is going to get there first with its proposal for generics and hope for sum types.
> Easily visually scanned and parsed with little mental overhead.
Sure, but I'm asking about the actual traits that make something easy or hard to parse. The only languages that I find more readable than OCaml are Ada, Pascal and SML. What causes mental overhead in OCaml/SML syntax for you?
Yes, I do. Especially when you start considering multiline examples with flow control. There are probably a couple of factors at play:
1. Familiarity. Like (presumably) most programmers, I'm familiar with languages in the C syntax family. Of course you can protest that this is a subjective criterion, but little good that will do you as you try to convince your colleagues to adopt (what they perceive to be) your pointlessly cryptic language for the next project.
2. Visual structure is important and while OCaml's minimalism makes for elegant parser algorithms, it works against human psychology (or so I strongly suspect).
Look, I want to like OCaml/SML. I think the type system is a step in the right direction, but the type system is just gravy and the practical concerns--the fundamentals--are neglected (as much as you may protest to the contrary).
> Yes, I do. Especially when you start considering multiline examples with flow control.
Could you show an example?
> it works against human psychology
What minimalism are you talking about? It uses nearly the same notation that mathematicians have used for decades. Most of the constructs are very similar to those in Python.
I can't think of a popular OO language where they are considered a crazy experimental feature. One historical reason for a certain reticence early on was people were unhappy with C++ templates.
Pretty much all syntax is unreadable if you don't know the lingo. For example, try presenting the ubiquitous
for (int i = 0; i < 10; i++) {}
to ten random people without prior programming experience and see how many of them can correctly tell you what all that means. Similarly,
{ a, b in a > b }
is probably not very clear to people who don't write Swift and perfectly lovely closure syntax to people who do.
There's language syntax that's actually unreadable, for instance due to using names that obscure or otherwise don't clearly express what a construct is/does or using the same operator/keyword for too many different context-dependent purposes. I don't quite think the sheer presence of angle brackets makes code unreadable, any more than the sheer presence of curly braces or parentheses do.
> generics can be abused to make the code unreadable
All features can be abused to make the code unreadable. E.g. all modern languages have regular expressions in their standard libraries, even though complex regular expressions are essentially write-only code.
If you tell me, for example, that Go generics are going to be like C# generics, then I have to be familiar with the full semantics of C# generics. Essentially you tell me that in order to understand X I have to understand Y first, that's not good. Consumers of your language are not, for the MOST part, PLT nerds.
I’d compare it to e.g. the work on async/await in Rust, where the discussion seems more directly “we like this feature that C# pioneered and JS has adopted, here’s how we plan to adapt it to make it work well with Rust.”
Admittedly, the Rust async/await RFC and the Go contracts proposal both discuss prior art in sections towards the end, so they are actually similar in that respect. Maybe it’s really just a question of tone and messaging, and the particular discussions that tend to end up on HN.
Fair point; however, I think you're conflating language-design discussion within a closed group of PLT-invested people with a blog post targeted at a general audience.
The async book is in the process of being re-written; I wouldn't be surprised if a comparison to JS ends up in there, given that we have some significant differences and it really trips up a lot of people who come from JS.
How is your suggestion not equivalent to "be familiar with the full semantics of C# generics"? How does that help me with being productive using Go's implementation of generics, which has a fundamentally different type system and syntax to begin with?
You're asking why general knowledge of generics and the pros and cons of various implementations would give you better understanding into the specific trade offs made by Go's implementation? You can't imagine why?
The big advantage is that PLT nerds (Who else are you going to learn a programming language from?) already know the ugly, hairy bits of Y and how to work around them. The alternative is for X to invent new ugly, hairy bits.
> then I have to be familiar with the full semantics of C# generics.
Presumably, by the time the feature ships, the golang.org/doc entry on generics will not consist of just 'lol, they are just like C# generics, docs.microsoft.com, chum'
It seems like you’re objecting to the messaging (that referencing an existing language doesn’t work for people not familiar with that language’s implementation) and also that, at its inception, Go didn’t pick (randomly?) a language off which to model its generics implementation. Am I misreading?
Well, there doesn’t seem to be a lot of discussion of alternatives at all, so I can’t really see what design and implementation decisions are actually being made. Most likely I’m just looking in the wrong place, and those discussions are taking place elsewhere.
I guess my complaint is that this talk, like all the other updates on their progress on generics, implies that Go generics exist in a vacuum, whereas in reality there’s a ton of prior art that could usefully be referenced.
Edit to add: this particular talk is focused on syntax details. Those aren’t unimportant but they’re a small part of the whole picture. As I commented on the detailed Contracts proposal, the decision to add contracts rather than simply using interfaces (as in Java and C#) seems significant but isn’t explained.
It didn't all hit HN, or perhaps more accurately, it probably did all hit HN but didn't all get upvoted because how many times does HN need to chew on it in a month?
I think HN would chew on it every day if it could! Luckily it doesn’t bubble up to the front page quite that often...
Edit to add: thanks for the link! I now see that the full Contracts proposal includes some sections towards the end that address my concerns, eg “Why not use interfaces instead of contracts?”
As jerf mentioned, there has been lots of talk; every single time the question of generics is raised here, /r/programming, or any Go-specific forum, it's addressed.
With respect to interfaces vs contracts: interfaces are about runtime polymorphism while contracts are about compile time polymorphism. This is an important difference when you consider []interface{Foo()} vs Slice(contract{Foo()})--an instance of the former can contain elements of varying concrete types while an instance of the latter can only contain elements of the same type. The other important detail is that interfaces only abstract over a single type while contracts support multiple type parameters (a single contract could specify a visitor type and a visitee type, for example).
Referencing existing work is not a rare or fanciful concept. If you're not familiar with prior work that's referenced in a paper then you can research that prior work as well. If prior work has not been referenced or addressed, it's usually considered a flaw.
> if the Go devs could just say “generics will work similarly to [C# / Java / Swift / D / whatever], except that we’ll address [problems] with [adjustments]”.
This is basically how C++ was designed, and it turns out not to work very well; the [adjustments₁] for [feature₁] turn out to introduce not only unanticipated [problems₁] with [feature₁] itself but also new and previously unimagined [problems₂] with [feature₂]. So the Golang designers prefer to take a much more cautious approach than the Lumbergh approach you're suggesting. So far it seems to have worked out well — the language is not without its compromises, and it's substantially more complicated than it was at first, but it's a very reasonable compromise.
Well, consider C++ inheritance combined with object nesting; you get slicing copies.
Or constructors combined with static objects; you get the static initialization order fiasco.
Or constness with template functions; you get two, four, or eight copies of each generic function in your source code according to which things are const.
Or (compile-time) overloading with (run-time) overriding; you get C++’s weird “hiding” rule about the other overrides you didn't override.
Or separate compilation with implicitly instantiated templates; you get geological build times as the compiler instantiates the same templates in every .C file and then throws away all but one of the identical instantiations at link time. (To be fair, this is far from the only reason C++ compiles slowly.)
Overriding combined with type conversions through implicitly invoked constructors (and implicit referencing and implicit casting to const) gives you annoying bugs that are unnecessarily hard to figure out.
Cleanup from exceptions via RAII combined with C’s unspecified argument evaluation order led to a situation where resource leaks during certain kinds of operations couldn't be avoided reliably, a bug in the language definition that wasn't noticed for several years, though I think it's fixed now.
The grammar is undecidable because of the number of different things that have been added, which sounds like a hyperbolic joke but is actually literally true, and a significant obstacle to implementing something like gofmt for C++.
The combination of template parameters using <> for parameters, the traditional longest-leftmost tokenization rule, and the >> operator for bitshifting made foo<bar<baz>> an unexpected syntactic pitfall, one that is now fixed.
There wasn't an exception-safe version of the STL for a number of years, which isn't really an interaction between templates and exceptions —a non-template container had to deal with the same exception-safety problems— but it did mean that for quite a while you could use the STL or exceptions but not both.
This is far from the extent of the problem. The C++ FAQ consists largely of affirmations of the form, “Doing [reasonable thing 1] works, and [reasonable thing 2] works too, but if you do them both, you will die horribly for your immorality.” It's exaggerating about the death part but the surprising problems are quite real.
I don't hate C++, and I think it's the best existing language for some problem spaces, but it's very much the poster boy for unexpected problems arising from interactions between features.
I generally agree with that characterisation of C++, but it’s not at all what I was trying to get at by suggesting that Go generics should copy ideas from other languages.
It’s good that they’re being careful and conservative; but I feel that they’re going out of their way to avoid all existing paths, whereas it would actually be safer to use some existing system(s) as a starting point, as both the advantages and disadvantages -- in practice, not just theoretically! -- are known.
I definitely agree that interactions between features need to be carefully thought through. One likely wart with the current proposal is that user-defined generic types still won’t look or behave quite like the built-in ones, as those have special syntax and special operators. Will it be considered good practice to expose raw slices, maps etc in APIs? Or will people start wrapping them (just as in Java, an array would most often be wrapped in an ArrayList for convenience)? This might have been easier to resolve if generics had been thinkable much earlier in the language design.
Oh, to that point I agree. I feel like that's not too far from what they're doing, really.
I agree that, with this design, it won't be as comfortable to use generic containers from a library as to use the built-in containers. That's been a persistent problem with C++ templates and Java generics, too, though (I think reasonable initializers for vector finally landed in C++17?) so maybe the lesson they took was that the base language should have slices and maps because there was no known reasonable design that allowed them to be seamlessly defined in a library while supporting static typing and object nesting? Maybe there is a solution if you design the language around generic containers from the beginning—have you tried D?
If you're just going to copy another language there really isn't much of a point in making a new language. Go is not java, or c# or rust.
Part of the explicit goal stated by the go team is that generics must still feel like go. If you slapped java generics onto go it would not feel like go.
For my own benefit, I agree with this. I just want a more usable language now. Something like ad-hoc structural typing as in TypeScript with aliases seems close.
For the future of languages I appreciate the clean-slate effort. Go has been about scoping out a use area, sticking a leg there, and coming up with something that works well there. At the same time I don't think there will be anything revolutionary, just something that seems simple and compact. I wish them great success and us a short wait.
Generics with Reflection was the first gut check I experienced while working after completing college and getting an ASP.NET job. Like 2-3 weeks into said job.
Had to do an application that dealt with 4-5 forms (can't remember). Like 300 fields on the bigger ones, smaller ones 50ish. I started out on one that was mid-200. I'm coding the way I was taught in school: create an object, then set its properties from txtWhatever.Text. Like 240ish times. I can't remember how many lines of code were in this code-behind, but it was substantial.
I turn it in, it works. Go me, let me go on to the next one. The senior guy on the team does one of the smaller forms. But he uses generics and reflection to basically iterate over all of the fields on the form and set them to the ephemeral properties on this generic object in like 8-9 lines of code. Then there was the rendering code that was vaguely similar.
Added bonus: 99% of his code-behind could be copied and pasted over to the newer forms and handle all of the work related to getting/setting form values. After the code proved to work for a few weeks, we replaced the massive code-behind with a call to his code (passing in the form object and the generic to be set and used later), with the same result, one place to manage the code, and no insanely large code-behind file (I also got better about not putting everything in one file over time).
Not saying generics and/or reflection are a silver bullet (I don't feel they are. Rarely so do they end up being the thing I go for), but it was definitely eye opening that straying from "see spot run" code could be advantageous. And that I wasn't "good to go" already.
Chiming in here as a Swift dev - I find its generics system incredibly helpful and end up writing something that uses generics about once a month. To those Go programmers who think they will never use them - it’s worth a little learning, and once you do you will find more ways to use them to make your code more applicable.
Chiming in as a fervent Go dev that's also a huge fan of generics: Generics are awesome, but always seem to add a ton of complexity. Everyone starts with just "oh, `func Reverse(slice []T)` is so obvious" kind of thing, but that's like saying `fmt.Println("Hello World")` is simple. It's simple cause you are doing a simple thing.
That said, I do want generics to come to Go, just... with an emphasis on what Go is, not just "here, let's replicate X's generics in Go". I am interested in the contracts implementation.
> Generics are awesome, but always seem to add a ton of complexity
The team I work in uses generics all the time and I'm not sure what complexity you are referring to. Care to elaborate? Is it some edge cases, or are you talking about it from a compiler perspective, or something else?
To me, not having generics is like saying let's skip handling bools and just store them in strings as "true" or "false". It's such a weird thing from my point of view.
Compiler perspective (fast compiles are a core part of Go, IMO), various tradeoffs (fast compiles? fast runtime? binary size? pick one or two), and seeing code bases that get out of control with generics and such.
Note that, for all of that, I'm still in favor of adding generics in Go, as I do see the value they can add. I'm just pointing out that "bashing" on Go devs on not knowing what generics are or how they can be useful is both non-productive and probably generally wrong. Plenty of us are saying "Man, if I had generics, this would have been less code/cleaner/more reusable!", just the other various tradeoffs outweigh the actual cost of using a different language. Go's ability to drop a very junior dev (or just one that has never use Go) into a codebase and have them be almost immediately productive is incredible.
I think TypeScript is a good example of one extreme. You've got conditional types, literal types, type mapping, and a bunch of other very advanced parameterized types.
(I also think it makes sense for TypeScript given JavaScript idioms; it mainly falls over when the caller has to interpret some very complicated error messages, however)
> To those Go programmers who think they will never use them - it’s worth a little learning, and once you do you will find more ways to use them to make your code more applicable.
There was a story I came across once. It went something like this:
The proponents of every gizmo think their gizmo is superior to the others, because it's got these useful features that the others lack, and isn't encumbered by the weird useless stuff those other gizmos have.
If only they spent time to understand those "weird useless features"...
What is it when you know about the power of features of the "higher level" languages, like generics, but still chose the "blub language" because of other reasons, like ease of hiring devs?
or just "Better has multiple dimensions", and not all of them are "Theoretically purer" or "look how succinct I can write this code". "How hard is it to hire devs?" is just one of those dimensions.
Indeed. I remembered some aspect of directionality, but couldn't remember how to present it in as compelling a way.
(Also, I don't really agree with some objective single value gradient for programming languages. As for anything with multiple dimensions of value, it's not sortable)
Yeah, I think the less directional presentation is probably better. While we can certainly fit things to various hierarchies (approximately or exactly), enough of programming extends beyond pure expressivity (especially into tooling, social, workflows) that it's easy to not get how useful something can be without the relevant context. (This is something that I'm trying to remember, working in JS now after a few years of professional Haskell.)
From a psychological perspective, the non-directional presentation is also more likely to leave people less defensive and more receptive to the thesis.
The decision to overload parentheses to express generic types is the only part of this spec I find nauseating. It doesn't scan well, and doesn't have a precedent in any major generics implementation I'm aware of. <, [, @-annotation... I don't care. Just don't mush it into the function declaration with the same syntax that encloses parameters.
Agreed, specifically it creates ambiguities in parsing (for computers but worse: for humans). Given Foo(x), you can’t tell if it’s a function call or a generic type without knowing what x refers to. You need context from afar to disambiguate.
You can tell if it’s a function call or a generic type based on where it is used; the local context. Function calls are either statements or expressions. Types appear in declarations.
That's a fair point, but there are still times when being explicit is useful or even required, and it doesn't hurt to use syntax that disambiguates easily.
When a function returns a function, you rarely just call it immediately in Go. But even if that were common (and maybe it would become common with generics? I'd have to think that through) it is uncommon to have a type name as unrecognizable as 'x'.
I've always been a big advocate for unambiguous, clear code, especially in Go, but I don't see a big potential for confusion here.
I don’t want to overstate the problem; I think it is a bit less readable. While types are rarely so indescriptive as “x”, they are often still plenty indescriptive, and I don’t know why we should introduce a syntax that depends on the clarity of type names or the frequency of other potentially ambiguous constructs when we could use a syntax that is unambiguous with no caveats and friendlier/more familiar to the larger body of programmers who have not yet tried Go.
I'll give the 'why' rather than the 'what'.
In most mainstream languages you take a List<Int> and substitute for List<X> without changing the implementation of List.
But there's another substitution you could have made, which is L<Int>, where the container itself varies. HKT allows you to parametrise over L in the same way you were able to parametrise over Int.
Higher-kinded types. Generics that take generics as parameters. They allow things like writing a function that both takes and returns any container type you can map over with different (defined) element types. Given how hard arity-0 generics have been to get into Go, I suspect that the chances of HKT making it are slim to none (but I think that the grandparent knows that and is just being funny).
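A sketch of that in Scala, which has HKT: F below is a hole for the container itself, so one function works for List, Option, or anything else you can map over (Functor and doubleAll are just illustrative names):

trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object HktSketch {
  implicit val listFunctor: Functor[List] = new Functor[List] {
    def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
  }
  implicit val optionFunctor: Functor[Option] = new Functor[Option] {
    def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  }

  // Parametrised over the container F itself, not just over its element type.
  def doubleAll[F[_]](xs: F[Int])(implicit F: Functor[F]): F[Int] = F.map(xs)(_ * 2)

  val a = doubleAll(List(1, 2, 3))   // List(2, 4, 6)
  val b = doubleAll(Option(21))      // Some(42)
}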
Generics are to types what HKT are to type signatures. In functions, they usually map to arity: an arity mismatch is like a higher-kinded type mismatch. An easy (if not totally accurate) way to view them is as doubly-generic types.
If you have some plain type A, it has kind * . If you have a type-level function like List, something you apply to a type to get a type, it has kind * -> * .
I've decided that it's the same as having a simply-typed lambda calculus with one type (*) and function types living inside of your type system. Is this correct at all?
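Roughly, yes: kinds classify type constructors the way types classify values, with * as the only base kind, so they do form a tiny simply-typed calculus. In Scala notation (the defs below are only there to show what each kind of parameter looks like):

object Kinds {
  // Int, String, List[Int] : *              (ordinary types)
  // List, Option           : * -> *         (need one type argument to become a type)
  // Either, Map            : * -> * -> *
  // Functor                : (* -> *) -> *  (takes a type constructor)
  def plain[A]          = ()   // A : *
  def constructor[F[_]] = ()   // F : * -> *
  def higher[G[_[_]]]   = ()   // G : (* -> *) -> *
}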
Personally, I prefer the current interfaces-based solution to generics. It's a little verbose, but keeps the language simple.
However, I think the people we should be listening to most are the ones developing huge projects in Go, like Kubernetes. Would having generics with this new contracts thing make it easier to develop and maintain e.g. Kubernetes? I'm truly curious.
It is a hack. The article lists a whole series of things that are standard in other languages that you can't do in go. Here are some examples:
Find smallest/largest element in slice
Find average/standard deviation of slice
Compute union/intersection of maps
Find shortest path in node/edge graph
Apply transformation function to slice/map, returning new slice/map
And here are data structures that other languages have but go does not:
Sets
Self-balancing trees, with efficient insertion and traversal in sorted order
Multimaps, with multiple instances of a key
Concurrent hash maps, supporting parallel insertions and lookups with no single lock
If it were not a hack, then go would not have these limitations.
Many smaller projects would benefit too. I would like to build a typesafe tree for an efficient sorted map, and to be able to merge-sort over multiple trees of different types. It would allow some extremely useful channel combinators for doing rather common things like safely shutting down a service with some background processing. Often these things leak, and often I find myself spawning more goroutines just to map between types or to stop generic interface types infecting the rest of the API.
I have been vigorously pro-generics since the beginning, and this is the reason why: I want generic data structures. Arrays & slices and maps are great, and they really are the 90/10 solution a lot of the time, which is precisely why putting direct support into your syntax for the two of them is so very, very popular, but that other 10% comes up.
Plus, there are some generic data structures that will really work well in Go, like, for instance, an immutable tree. Granted, it'll still take some care to use properly in Go as it does not have "const" or anything like it, but it can still be done. The problem I have is not with accidental mutation, but that I just don't want to sit there and implement the immutable tree code. (Trees are great, but they're really tedious to write in the best of times, and nightmares to debug in the worst.)
I'm not terribly interested in trying to jam functional programming into Go; I may make light use of map/filter/reduce but even if this was fully implemented it would still be a fairly unpleasant experience (functions that return "a value and an error" aren't much fun to map and can hardly "chain" at all). But I've missed being able to just grab a particular data structure a few times.
Oh dear. Personal attacks are not ok on HN, regardless of whom you're attacking. Please stick to the site guidelines, no matter how passionately you feel about the Liskov Substitution Principle.
It’s fairly difficult to come to terms with an “everything was going pretty well until...” moment and doubly so when it can be personified.
Once we broke LSP, a cascade of similar ideas started showing up with regularity, especially in J2EE. I used to be able to go on at some length but now it’s more “I hope I never meet this guy because it’ll be awkward as hell.”
> The only truly unforgiveable one is UnsupportedOperationException. The guy who wrote the collections API for Java didn't know the first thing about the Liskov Substitution Principle, and gave the world an implementation that violates it...
The implementation perfectly follows it because the LSP simply requires identical behavior, and the contract clearly states it may[1] throw an exception. All the implementations "may" throw an exception. Sorry, but they correctly lawyered the LSP.
> I came up with around 20% more than the selected design.
But `add` can throw 5 different exceptions depending on restrictions your implementation wants to place on a given collection, does it allow for all that? In terms of making an interface that was small and reusable by many projects, runtime exceptions are a pretty good compromise for a standard library.
Yes, those are the same behavior. Both of them "may" accept it and "may" reject it, and "may" includes both "always" and "never".
It is correct to the letter of LSP, but not the spirit of it, and this was a conscious decision; contrary to your original claim, they understood the principle perfectly well.
Of course they do. Not sure why it’s something to brag about or not to brag about though, it’s an implementation detail. Scala for instance uses custom classes for each set size up to a certain threshold.
Kubernetes is a transpiled Java project, so yeah, Java-like features would definitely make it easier :P
I think idiomatic golang works pretty well without generics in most cases, the big problem for me is that functional programming is essentially impossible without them.
It was originally written in Java and lots of its Go code is a strange mix of the two (Java-like idioms in Go). This might be where the term "Gova" came from.
If you read the article more closely, it says that Borg was written in C++. If you watch the video linked by @tuvan below, you'll hear the speaker (who is a contributor) mention that the original authors of Kubernetes wrote the first version in Java, which was rewritten in Go.
Agreed on keeping the status quo; adding generics makes it really easy to write bad code and increases mental overhead. Simplicity is what we should be going for; not having generics should've been kept as a selling point.
This may be practical from the point of view of 'top level applications' in Go. But there is a lot of murk in pure Go implementations of databases and data-structure-heavy libraries that could be avoided, leading to better testing and less duplication. If you find it simpler to avoid generics, that ought to be possible under this design.
Generics and interfaces go hand in hand IMO, especially in an application with heavy emphasis on data persistence. This way you can compose a class of some interfaces, pass it to a persistence layer (using a generic constraint of something common to all inputs of that layer), and then the persistence layer can check for all the interface types it cares about saving and pass those pieces off to whatever lower-level piece of code handles that one thing.
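A rough sketch of that shape, written here in Scala for brevity (HasId, Timestamped, save, and the entity classes are all made-up names, not from any real library): the persistence layer accepts anything satisfying a small bound and then checks for the optional interfaces it knows how to store.

trait HasId       { def id: String }
trait Timestamped { def createdAt: Long }

object PersistenceSketch {
  // One save function instead of one per entity type.
  def save[T <: HasId](entity: T): Unit = entity match {
    case t: Timestamped => println(s"saving ${entity.id}, created at ${t.createdAt}")
    case _              => println(s"saving ${entity.id}")
  }

  case class User(id: String, createdAt: Long) extends HasId with Timestamped
  case class Tag(id: String) extends HasId

  save(User("u1", 0L))   // goes through the Timestamped branch
  save(Tag("t1"))        // plain branch
}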
Generics enable you to avoid having 10 slightly different implementations of the same thing (even if dependencies are shared, you're still going to have 10 copies of plumbing without generics). Being wise about when and when NOT to use generics is to me the most important factor.
In our shop we tend to not introduce a new generic class/method until something gets painful (unless it's plainly obvious from the beginning). Because of this we usually begin to notice and reason about patterns in our code that seem common enough to warrant refactoring and whether the complexity of a generic implementation would be worth lowering the maintenance burden of copy/pasting said pattern all over the place. It's a case by case decision.
Having generics available at least allows us to make that decision for ourselves.
The examples I’d like to see worked out fully, which he mentions in passing, are encapsulation of concurrency patterns (parallelization, work queues, cancelling, etc.). It’s not that big a deal to write Max as a function or even inline, whereas there are serious practical pitfalls and common mistakes when doing real work with channels. It would be a lot more effective to fix these by writing library code rather than writing Gophercon presentations.
There must be an answer to this, but I've never seen a good response to why they aren't implementing parametric types and (single-parameter) typeclasses instead.
Simplicity? Yes, but generics are hardly any simpler.
Perhaps I'm misunderstanding, but I would have thought Go already implements typeclasses, and this proposal's extension to the interface semantics is a parametric typeclass?
It might be fun to share my viewpoint from an angle that's probably fairly unique.
I've started a number of large, highly deployed Go projects: Terraform, Vault, Packer, Consul, Nomad, and numerous libraries and other things. I started a company that employs hundreds of full time Go developers. Go has been one of our primary languages since Go 1.0 (and I used it prior to that).
Let me start by saying that there are _definitely_ cases where generics are the right answer. Usually when I talk about generics people tend to assume I disagree with the whole concept of generics but I certainly do not. Generics are useful and solve real problems.
Adding this paragraph after I already wrote the rest: this whole comment ended up sounding super negative. I'm voicing concerns! But, I think that the design proposal is exciting and I am interested to see where it goes. There are definitely places generics would be helpful to us, so please don't take my negativity too strongly.
## Technical Impact
Having written these numerous large, complex systems, I believe there are fewer than 10 instances where generics would've been super helpful. There are hundreds more times where it would've been kind of nice but probably didn't justify the complexity of implementation or understanding.
This latter part is what worries me. I've worked in environments that use a language with generics as a primary language. It's very easy to have an N=2 or N=3 case and jump to generics as the right way to abstract that duplication. In reality, the right answer here is probably to just copy and paste the code because the knowledge complexity of using generics (for both producer and consumer) doesn't justify it, in my opinion.
As a technical leader, I'm not sure how to wrestle with this. It's easy today because generics just don't exist so you have to find a way around it. But for a 150+-person org of Go developers, how do we have guidelines around when to use generics? I don't know yet. I guess that bleeds into human impact so...
## Human Impact!
Something that is AMAZING about Go today is that you can hire a junior developer with no experience with Go nor any job history, have them read a few resources (Tour of Go, Go Spec, Effective Go), and have them committing meaningful changes to a Go project within a week.
I don't say this as a hypothetical, this has happened numerous times in practice at our company. We don't force or push any new hires to do this, but Go is so approachable that it just happens.
I love it! It's so cool to see the satisfaction of a new engineer making a change so quickly. I've been told it's been really helpful for self-confidence and feeling like a valuable member of a team quickly.
The Go contract design document is about 1/3rd the word count of the entire Go language spec. It isn't a simple document to understand. I had to re-read a few sections to understand what was going on, and I've used languages with generics in a job-setting and have also been a "professional" Go dev for 9 years.
So what worries me about this is technical merits aside, what impact does this have on learning the language and making an impact on existing codebases quickly?
I really like what Ian said about attempting to put the burden of complexity on the _author_ using generics, and not the _consumer_ calling that function. I think that's an important design goal. I'm interested to see how that works out but I'm a bit pessimistic about it.
---
I have other viewpoints on generics but those are the two primary ones that stand out to me when I think about this proposal going forward.
"Having written these numerous large, complex systems, I believe there are less than 10 instances where generics would've been super helpful."
If you wrote Python, you'd probably be saying, "there are less than 10 instances where type declarations..."; you could make similar statements about concurrency or a host of other features.
And you'd be more or less right.
Here's the deal: a language with generics is very different from a language without generics. You write different code, you solve problems differently, you think differently. This is why they should have had generics in version 1.0 and why adding them later seems so underwhelming to some of us. Generics have a lot of advantages, but you are going to be looking at a bizarre mixture of programming styles for a long time.
> The Go contract design document is about 1/3rd the word count of the entire Go language spec.
They're quite different documents, so I think this is somewhat unfair and misleading. The Go spec doesn't spend prose on explaining rationales, alternatives, historical references, etc.
I see the use for generics all the time. I came from Ruby, where we have Enumerable, so you get standard constructs like map, find, first, last, each_cons, etc. Because Go lacks them, people constantly reimplement these things with less intention-revealing code using ad hoc for loops. Ugh.
This reflects my experience with Go to a tee. (Well, minus being a prolific library contributor, thanks for that!)
Prior to using Go professionally, I scoffed at the language and wrote it off as an extreme form of Blub paradox. Having worked in languages with generics, as well as languages with advanced type systems (Haskell, Rust, Scala), it seemed like a huge step back.
Initially, I did have a problem with the lack of generics, because I leaned on the feature regularly when writing software. Four years of professional use later and I can say that I am very much glad for the lack of generics. It is almost always straightforward and easy to read and understand Go code that someone else has written. The same cannot be said for the other languages I've mentioned.
For the problem domain we are using it in (devops), it has been a godsend. The company makes use of many different languages from various paradigms, yet anyone can pick up Go quickly if they want/need to contribute to or deeply understand our tooling.
Mixed feelings about this. One of my favorite things about Go is that it's a small and relatively simple language. It's a "fast enough" language without the steep learning curve of Rust. Too many features that make sense individually could change the calculus to "might as well use Rust".
I'm not so sure about the "steep learning curve of Rust" stereotype anymore. If you are comfortable with low-level programming languages in general, in a few days you can get yourself started fairly quickly with Rust, to the point where you will still battle with the compiler re: borrowing and related things, but will be able to write functional code. Coming from pure Python/JS will be tough, but it will be the same with any other low-level language.
My work is a Ruby/JS shop, and we have some Go in production as well. We definitely need a little hand holding to ramp anybody new up on the Go codebases. In many cases our devs haven't used types, pointers or any equivalent of goroutines & channels. I've written just barely enough Rust to say there are even more concepts that would be new to my team.
Even with Go being comparatively simpler, I could go either way on whether it was the right choice to add it to the stack.
To be honest, this response feels a bit aggressive and dismissive to me.
Many of my coworkers (myself included) entered the profession through bootcamps. Others came up during the PHP/Wordpress to Ruby/Rails era of web development, and never had a reason to learn languages like C++ or Java. I don't think there's anything to make fun of, or apologize for in this scenario.
The shop I'm part of is staffed by bootcampers or fully self-taught people. Not a degree in sight. They have to be passionate about code/computers/the web/technology/math/some associated subject to thrive, but when they are, it seems to work out. I'm led to believe this is the relative norm for web development.
Bootcamps do not teach things like pointers, type theory, or big-O notation. I'm not sure what they do teach; I'm self-taught and, via MIT OpenCourseWare and other net resources, have learned a lot about those, but I have no idea how that compares to either bootcamps or modern CS courses. Not all of my colleagues know about pointers or type theory, but the senior ones do.
I hope this doesn't happen. I love golang because it is simple and easy to read. Both will go away when programmers start to write unreadable generic meta-programming classes which "you don't need to understand, just use them".
I admire the golang devs for being opinionated and standing up for the core principles of their language so far.
I don't think it's as clear-cut as you make it seem. Generics greatly reduce the amount of code we need to read in exchange for a slight increase in abstraction. In my opinion, the tradeoff is worth it, especially given that Go already has abstractions that are more difficult to grok than generics, which, all things considered, aren't that hard to understand.
Maybe you could share feedback with us and the designers about which parts you think will be hard to understand? This is a valid concern, but the only way to address it is to spell it out and share it as feedback on the proposal.
I wonder if there's room for a language that is small, allows for nearly limitless abstraction, and still has great tooling. Go is (or, you could argue, was) small, and now has better tooling, but is just beginning to increase its ability to create abstractions. Common Lisp is large (only 200 pages fewer in its spec than C++, if I remember correctly), has unparalleled tooling (like the don't-unwind-the-stack debugging and SLIME), and really good abstraction power. Scheme is small and consistent (unlike CL), and also can create similarly advanced abstractions, but is missing some of the tooling that CL has enjoyed for eons.
It'll be really cool to see a language that feels consistent, tight, well-engineered, and small; can create powerful abstractions similar to what can be achieved using CL macros and CLOS; and has an awesome debugger, editor interop, interactive programming with a REPL, build tools, etc.
I fleshed my comment out into a blog post yesterday, which got some interest[0] on Lobsters. It appears that modern Scheme is easier to write portably and an even nicer compromise than I had originally thought.
Scheme is possibly the best designed language I've ever seen. Which may be why no one uses it.
(Note: Macros, even Scheme's hygienic ones, can produce horrible monstrosities. Like loop. And the urge towards object systems gives Scheme a horde of them, mostly bad. This may be a lesson in getting what you ask for.)
Go has generics. What it actually lacks is user-defined generics.
Which shows the absurdity of the situation. The fact that it has generics demonstrates that generics are a useful and important feature. And yet they think their provided generics cover every possible use case of generics that you will reasonably need to use in Go.
And yet, the core devs have never said they don't want generics, only that they didn't want a poor implementation.
Also, there's a bit of hilarity about commenting this on a blog.golang.org post whose first sentence is "This article is about what it would mean to add generics to Go, and why I think we should do it.", written by a core Go dev....
As a functional leaning programmer, I think it's striking that the resulting draft is essentially a curried function whose first parameter is the type!
func Reverse (type Element) (s []Element) {
first := 0
last := len(s) - 1
for first < last {
s[first], s[last] = s[last], s[first]
first++
last--
}
}
This is later called like:
Reverse(int)(s)
Which if this were a curried function, would also be callable like:
    Reverse(int, s)
Now I wonder if facilities for currying functions might be a higher-level addition that could result in the same thing (so long as we can pass bare types, which is part of this change as well).
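If I'm reading the draft correctly, instantiation already gives you something very close to that curried shape: applying the type argument yields an ordinary function value. A sketch in the draft's syntax:

    reverseInts := Reverse(int) // an ordinary value of type func([]int)
    reverseInts(s)              // so Reverse(int)(s) is "instantiate, then call"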
1. The type name itself should indicate a generic, rather than requiring new syntax. This is easier for the human eye to parse. For example:

    func Reverse(first _T, second _T)

2. Since generics here amount to a compile-time code generator, I suggest that Golang force developers to explicitly list which types use the generic (a purely hypothetical sketch follows):
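(Nothing like this exists in the draft; the syntax below is invented purely to illustrate the suggestion.)

    // Hypothetical: the author enumerates the instantiations up front and
    // the compiler generates code only for those.
    func Reverse(s []_T) for _T in (int, string, float64) {
        // ...
    }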
If I'm understanding correctly, contracts are a way to implement compile-time duck typing? If that's the case, I think it's a pretty neat twist on generics.
One of the things I like most about this is that Ian includes the simple mistake in the first function. It’s a bit humbling and acknowledges the fact that even the best of us often make trivial mistakes.
It also helps make the point that even the simplest code needs a basic test and adds another argument in favor of generics on the premise of duplicating tests.
I feel pretty good reading this. I like that you can write listMyType := List(MyType). I'm still a bit dubious about contracts; they're so similar to interfaces. If there were a way to derive or relate a contract to an interface, that would help me.
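For comparison, here is roughly what a contract looks like next to the function that uses it, as I read the draft (details may be off):

    // A contract constrains the type parameter much like an interface does,
    // though it can also mention operators such as == or <.
    contract stringer(T) {
        T String() string
    }

    func Stringify(type T stringer)(s []T) (ret []string) {
        for _, v := range s {
            ret = append(ret, v.String())
        }
        return ret
    }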
This process took way too long. I've already moved back to Python 3 and am super happy with it. The truth is most software doesn't really need the performance benefit Golang offers, and you can code up something in Python in a third to half the time it would take in Golang. Aside from agility, there's one other benefit that imo matters a lot: reduced LoC. If there's one thing I've learned in my career, it's that you can never really have too few lines of code, provided it achieves the same result as a hypothetical more verbose alternative.
I will admit though that I miss Go's CSP implementation and error-handling convention. Maybe I'll come back to it in the future after they add generics and fix the dependency system.
I think another area of Go that could use improvement is a more portable standard library optimised for embedded and OS dev. A lot of people want to use Go as a "systems language" and write drivers etc. in it. But the standard library doesn't have good support for freestanding targets, or at least it's not a priority. C, Rust, and other languages designed for use in a non-hosted environment have clearly delineated areas of the runtime that require assembly, manual interfaces for I/O and memory allocation, etc. Go's runtime by default assumes the presence of an operating system.
"Go's runtime by default assumes the presence of an operating system."
It also assumes a runtime several megabytes in size and enough RAM to make GC feasible, which if nothing else is probably a real mess when it comes to drivers.
What you probably need, rather than being a second-class citizen of the main Go language indefinitely, is a separate dialect that is close to Go, but can go its own way if it makes sense, like: https://tinygo.org/
There is nothing wrong with GC. There exist plenty of network stacks in production that are written entirely in Go (see Docker/Kubernetes et alia). There are also operating systems that have GCs, such as Oberon or the few C# OSes written by MSFT and others. As long as care is taken to isolate hard real-time requirements when needed, a GC is perfectly fine. There is, however, a real advantage to Rust's style of explicitly marking no_std: after all, what's the point of using something like tinygo (other than for performance/resource reasons) if it cannot run most of the existing Go libraries?
Why are people obsessed with simplicity? There's a quotation by Einstein that I like to keep in mind: "make things as simple as they can be, but no simpler". The implication is that if you make things too simple, you actually break things. Lack of generics makes for software that's actually more complex than it needs to be. Take Swift, for example: lots of complex features (protocols, all manner of functional transformations, generics, enums, structs, iterators, objects with many method qualifiers, etc.) but really nice and efficient to program in.
And I had yet to find a single practical application for closures, right up until the moment I started using Ruby (and later, Rust).
Turns out that they’re so useful in practice that they’ve been bolted on to—as far as I can tell—nearly every language in widespread use today. None of these languages “needed” such an improvement. Billions of lines of Java and C# were written without this feature. And yet today it would be virtually unthinkable to release a language without them.
I feel like closures are a consequence of having functions as first class citizens, so it's not exactly a direct comparison. But I guess I see your point.
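For example, Go already has closures as a consequence of first-class functions:

    // counter returns a function that closes over n.
    func counter() func() int {
        n := 0
        return func() int {
            n++
            return n
        }
    }

    // next := counter(); next() == 1; next() == 2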
But what is the benefit? How does your code change? Generics are a tail abstraction: the last abstraction you can apply to your code to reduce boilerplate. That usually means it is the least useful, and in generics' specific case the boilerplate it reduces is minimal.
Its lack of impact is why I don't really care about the subject. Adding or removing generics from a project matters little, so why bother complaining? The class creation and encapsulation choices are much more impactful to future code edits.
The benefit is that you can remove a lot of duplicate code that does exactly the same thing, except for the type it operates on. The blog post talks about the Reverse function.
Especially when you need to make changes (bug fixes, performance improvements, etc.), if you have dedicated functions for each type, it takes effort to keep them (plus their tests) in sync.
If you don't see any issue with this duplication, then I can see that generics don't add a lot.
There is research showing that the number of bugs is a function of lines of code. As in, you can be overall better or worse with the implementation, but the more code you write, the more mistakes you'll make. Removing almost-duplicated code helps with this metric, especially when you operate on data structures ripe for off-by-one errors.
It's just not that much code, usually. I would like to see an example of this "a lot of duplicate code" scenario. Most code sharing is done via superclasses. Generics seem more like syntactic sugar on top of the type sharing that interfaces already allow.
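For concreteness, the duplication being discussed looks roughly like this (a minimal sketch using the post's Reverse example):

    func ReverseInts(s []int) {
        for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
            s[i], s[j] = s[j], s[i]
        }
    }

    func ReverseStrings(s []string) {
        for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
            s[i], s[j] = s[j], s[i]
        }
    }

    // ...and again for every element type you need, each copy with its own tests.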
My experience from Scala is that generics are not an afterthought or a boilerplate abstraction. They come in handy naturally once one knows how to use them, and it is not rocket science.
The real benefit of generics is that they enable composability. The entire Scala collections library is the best example of that. The benefit is non-obvious at the level of a single function; a complete program is another story.
It's, of course, possible to achieve something like this with golang and interface{}. I don't mind working with it; all the type switches and whatnot make me feel like a proper tinkerer! But what is actually happening is that I'm wasting my time: the compiler already knows all the types and can figure things out for me. Of course, a sound type system is needed for that.
The type switches are error-prone; it's like working with java.lang.Object everywhere. And if I can avoid all these "GetX" functions, where X is "String", "Int", "Int32", "Int64", and so on, even better. Just look at viper: it would be so much cleaner and more lightweight with generics.
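A hedged sketch of the difference (Get here is illustrative, not viper's actual API, and the generic version uses the draft's syntax):

    // Today: one accessor per type, each with its own assertion.
    func GetString(m map[string]interface{}, k string) string {
        if v, ok := m[k].(string); ok {
            return v
        }
        return ""
    }
    // ...repeated for Int, Int32, Int64, Bool, Duration, and so on.

    // With generics, a single accessor could cover them all (assuming
    // assertion to a type parameter is permitted):
    func Get(type T)(m map[string]interface{}, k string) (T, bool) {
        v, ok := m[k].(T)
        return v, ok
    }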
Go strongly discourages you from making extremely useful abstractions over types early on in your design. With generics you don't just write the same things with less boilerplate; you can exploit more symmetries.
> Generics are a tail abstraction: the last abstraction you can apply to your code to reduce boilerplate. That usually means it is the least useful, and in generics' specific case the boilerplate it reduces is minimal.
If the std lib had functions that mapped to the "Slice Tricks"[0], like virtually every other language, I'd use them constantly. Sure, I wouldn't write generic functions all that much, but I would call generic code every day, and it would be a serious ergonomic, correctness, and readability improvement to have named functions for very common patterns like the slice tricks.
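For instance, the "delete element i" trick as a named generic function, sketched in the draft's syntax (Delete is hypothetical, not a real std lib function):

    func Delete(type T)(s []T, i int) []T {
        // The classic slice trick, written once instead of inlined everywhere.
        return append(s[:i], s[i+1:]...)
    }

    // xs = Delete(int)(xs, 3), or just Delete(xs, 3) if the type argument is inferred.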
Go already has generic built-in maps, channels and slices, so clearly those generic types are useful.
If what I actually want is a multiset, say, it would be great to have a standard library class that looks and behaves similarly to map except it’s a multiset, rather than just having to use map and do all the little 1- and 2-line boilerplate tricks for the multiset operations. I can write shorter code that more clearly expresses my intentions, and reduce the risk of stupid little bugs in the boilerplate.
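A minimal sketch of the two styles (Multiset is hypothetical, not a proposed std lib type):

    // Today: a multiset is a map plus little rituals at every call site.
    x := "apple"
    counts := map[string]int{}
    counts[x]++ // add
    if counts[x] > 0 {
        counts[x]--
        if counts[x] == 0 {
            delete(counts, x) // keep zero counts out of the map
        }
    }

    // What a generic library type could offer instead:
    var ms Multiset(string)
    ms.Add(x)
    ms.Remove(x)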
That's because you write a particular kind of program. Notice that there are no good ORMs or numerical libraries. Both of these are use cases for generics. Of course it doesn't help that the go community is convinced that ORMs are evil (which is just post hoc rationalization for being unable to write one).
Developers think ORMs are evil because they've had horrible experiences with ORMs in general. I'm one of them. The only ORM that I enjoyed using was Dapper, and it's barely an ORM.
Dapper really is such a weird edge case. You still write all the SQL; all it does is convert a type into the input parameters and map the returned rows into objects whose properties match the column names. Frankly, I feel that's the right way to build ORMs in any language that has the facilities to manage it (and in languages that don't, you pass in a function to do the dumping of rows into the type for you).
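In Go terms, that style might look roughly like this sketch (Query is an assumption written in the draft's syntax, not an existing library):

    import "database/sql"

    // The caller still writes the SQL; the helper only maps rows into
    // values of T via a caller-supplied scan function.
    func Query(type T)(db *sql.DB, q string, scan func(*sql.Rows) (T, error), args ...interface{}) ([]T, error) {
        rows, err := db.Query(q, args...)
        if err != nil {
            return nil, err
        }
        defer rows.Close()
        var out []T
        for rows.Next() {
            v, err := scan(rows)
            if err != nil {
                return nil, err
            }
            out = append(out, v)
        }
        return out, rows.Err()
    }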
What sort of horrible experiences have you actually had with ORMs? Every time I've used one it's been an absolute joy; I hate repeating myself in my DB, in SQL, in mapping code, and again in application code. The main thing I see people complaining about is performance edge cases, but for small and mid-scale applications those never outweigh the massive DRY-principle gains good ORMs provide.
There's no denying most C# code makes heavy use of generics, but we've deviated from the actual topic. C# is not Go. Far from it.
I'm not advocating for bringing a Dapper-like library to Go. I'm pretty happy with the solutions we have right now to map data coming from a SQL database into structs in Go. I'm commenting specifically on the argument that Go developers don't like ORMs.
> Of course it doesn't help that the go community is convinced that ORMs are evil (which is just post hoc rationalization for being unable to write one).
Yep. I held this opinion before Go came out, only because I couldn't write an ORM in Go.
But I do still hold that opinion now as a go user. I'm saying I didn't develop it as a result of not being able to write/use an ORM.
Your wholesale dismissal of all go users that don't use ORMs is what I object to. There is a laundry list of reasons not to use them, simplicity being the largest in my eyes. If you're working with simple queries on a small number of tables/views/etc., there's really no reason to bring in a big heavy ORM.
As your usage scales that choice might change, but until then just do the simplest thing that has correct behavior and minimal magic involved.
“I can see the value so those that can’t are inferior.”
You basically just post-hoc-rationalized their indifference toward ORMs by making them sound incompetent.
As if there are no other arguments for avoiding them that might seem valuable to people who are not you?
It sounds to me like you conjured the hot take, acknowledged it was post hoc rationalization on your part, but posted it anyway, projecting your own capacity for after-the-fact rationalizing onto the ORM haters in the Go community.
The point wasn't that Go programmers are too incompetent to implement an ORM, it's that they are actually unable to given the limitations of the language.
I'm pretty sure C++ is used for exactly the same things Go is used for. And C++ is used for a lot of other things as well. In fact, you will probably have trouble finding a problem that hasn't been solved in C++ by somebody: scientific computing, server processes, CRUD applications, embedded applications, AI, graphics, programming languages, libraries, etc.
C++ has a lot more features and therefore it's actually able to be used for a lot more things. People that don't need a language that can be used for every possible computing problem think that maybe C++ is a tad too complicated. Those people certainly have a point.
I don't think it's the feature set that makes C++ viable for more situations than Go. In my opinion it's mainly because Go uses managed memory, which takes part of the code's runtime behaviour out of the programmer's hands. That rules Go out for situations where you need precise control over runtime behaviour.
That said; C++ is a ridiculously complicated language. I don't get why people think that Rust is difficult.
"This article is about what it would mean to add generics to Go, and why I think we should do it. I'll also touch on an update to a possible design for adding generics to Go...."
This is a situation where who is making the proposal is more important than the proposal.
The proposed syntax feels like a caricature to me. That something like this would be easily possible:

    func (c Connection) Read(type T Writeable)(into *T) (int, error) {

This is quite securely inside shark-jumping territory.
Also; I haven't seen Rob Pike's name really anywhere in these blog posts or discussions, or on any recent Go blog entry. Is he still involved with the Go project day-to-day? I always got the impression that he was one of the bigger anti-generics voices on the team internally.
As far as I can tell, the latest proposal draft forbids type-parameterized methods. One must attach the type parameter to the receiver's type or define a type-parameterized function instead. I am making no claims about your broader point, only clarifying that your example is not permitted by the proposal draft. See https://go.googlesource.com/proposal/+/4a54a00950b56dd009648...
I do not think generics should be added to Go. It does minimalism and it does it tolerably well. Yet I enjoy using generics in other programming languages. If I had my druthers I'd be in Haskell-land in more than my spare time.
Hence, my modest proposal: Add generics, but make it a brand-new programming language with a totally different name.
Sadly, the name Blub is already taken. I propose we call it Glop. Or perhaps: Gong.