Expectations for generics in Go 1.18 (groups.google.com)
263 points by smasher164 8 months ago | 200 comments

Thank god, this guy[0] can finally cut over to native generics in his code.

[0]: https://www.reddit.com/r/rust/comments/5penft/parallelizing_...

On the question of "what's this?":

    type ImmutableTreeListᐸElementTᐳ struct
"If you look closely, those aren't angle brackets, they're characters from the Canadian Aboriginal Syllabics block, which are allowed in Go identifiers. From Go's perspective, that's just one long identifier."

Simultaneously amusing and disturbing. Is there an award for which one might nominate this person?

That's the kind of thing you see in a comment and then back slowly out of the room, never breaking eye contact with the code until you are far enough away to turn tail and run.

At the very least they could have used the cute Japanese 「quotation marks」 to avoid confusion.

This is the person you need. Be assured that they were the person willing to log into the live prod server to fix that script that was killing your startup's app at 2am on a Friday night in 2007 when everyone else was too gun-shy.

> oh my god

Definitely the correct response.


> c++ allows 0-width spaces in variable names. Where's your god now?

Nowhere. We have strayed maximal distance from god's light and are currently in orbit around Satan.

Bow to the One whose Name cannot be expressed in the Basic Multilingual Plane

> > c++ allows 0-width spaces in variable names.

Is that true, though?

> A valid identifier must begin with a non-digit character (Latin letter, underscore, or Unicode character of class XID_Start) and may contain non-digit characters, digits, and Unicode characters of class XID_Continue in non-initial positions. Identifiers are case-sensitive (lowercase and uppercase letters are distinct), and every character is significant. Every identifier must conform to Normalization Form C.

I don't know about the standard, but both GCC and Clang accept ZWSP characters in the middle of variable names.

Clang logs a warning about potentially invisible characters every time you use them. g++ just compiles the code without warning, even with -Wall and -Wpedantic.

What clang _doesn't_ warn for, is the use of the left-to-right override character. This can be used to confuse the victims of your code even more.

> every character is significant. Every identifier must conform to Normalization Form C.

This seems to be the only limitation for that to be true.

So the question is does 0-width space (U+200B) survive canonical decomposition, followed by canonical composition?

It feels like it does to me.

I'm dying. This is hilarious.

I suppose there are many examples like this. Once I copied a piece of code from SO, but IntelliJ IDEA warned me that the semicolon at the end of the line was not correct.

Upon further investigation (I think IntelliJ IDEA reported the Unicode value) I found out that the 'semicolon' I copied was a Greek character that looks exactly like a semicolon.

I wonder how people on vim/emacs deal with situations like this.

It's similar when online editors try to be clever and replace a straight " with a fancy one (like in Word), really annoying.

That's why PowerShell actually recognizes those fancy quotes and treats them as normal quotes.

So one Microsoft product contains an automatic workaround for an issue created by another Microsoft product.

On VSCode, I use the Gremlins extension, which highlights all those suspicious characters.


Pressing ga in command mode in Vim displays the ASCII / Unicode value of the character under the cursor.

Yes, but IntelliJ IDEA's linter showed me the error. Without a linter you'd need a compiler/interpreter to tell you something is not right.

And in my case, even if the compiler/interpreter reported invalid syntax, I would not think of checking the Unicode codepoint, as the character appeared to look exactly like a semicolon.

The rust compiler warns when it encounters confusable symbols like that. Not sure about other languages.

vim user here. Most of us are running the same linters and language servers as every other major editor, which do a pretty good job of catching stuff like this. Modern vim is really only a C core with a web of JS, Python, Lua, and vimscript using the exact same third-party solutions for every other part of an "IDE". emacs is probably similar, but I have no personal experience with that.

Java Certified Professional

Canadians now roll with the term Indigenous instead of Aboriginal

That may be, but in this case it is a precise technical term, referring to a particular block of Unicode: https://en.m.wikipedia.org/wiki/Unified_Canadian_Aboriginal_...

Maybe the Unicode Consortium should change it, but for now, Canadian Aboriginal is the correct term.

In 10 years a new word may become fashionable. Better to keep technical terms the same.

It's not about fashion, it's about cultural identity. I take it you don't see a problem with master/slave either?

Just do a find and replace, refactoring is easy and trying to argue on the internet about it is tiring.

I take it you didn’t grow up in the 80s. Trends in words change: you’ll see it with your own eyes.

> trying to argue on the internet about it is tiring.

… exactly.

If identity is about appearance, that would seem to admit fashion for consideration.

Colloquially in the north "native" is ubiquitous.

For now...

I love the reply: “c++ allows zero width spaces in variable names. where’s your god now?”

Fixed in C++23 by adoption of P1949R7 [0], which didn't originally set out to fix that, but seems to have after a Netherlands national body ballot comment.

[0] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p194...

I once read a comment like "Gives a new meaning to the term native code".

I'm actually curious: if Go normalized identifier characters, and if he had used lookalike brackets, would normalization make those equivalent to actual angle brackets? Would he then not even have to change his code?

I guess that's a lot of ifs/ands, but it's interesting that one could have future-proofed their code if their bet on syntax paid off.

They're semantically different characters and Go supports multilingual identifiers, so I would not expect normalization to impact this. At most I would expect normalization to simply deal with ordering combining marks and standardizing on composed or decomposed forms of characters (where applicable.)

This got me curious about what I'd read regarding domain names and visual normalization of Unicode, and I found this: http://www.unicode.org/Public/security/revision-05/confusabl...

It might be useful for editors and compilers to check for tricky Unicode (lookalikes for common characters, atypical changes in direction, invisible spaces or other formatting control codes, etc.) in this era of copy/paste coding.

I believe Rust will warn you if your non-ASCII identifiers are potentially confusing based on this list.

It's funny how codegen has become widespread in Golang to work around its deficiencies.

Codegen is a super flexible and versatile tool, and simply having a strong generics feature does not mean you can avoid it: at $DAYJOB we use codegen even for C++, e.g. for grpc/openapi types, generating rapidjson streaming unmarshallers, etc., not even counting Qt's moc/uic.

I understand and agree; I've run many a Java project that used codegen to turn Protobuf/Avro IDLs into types.

But Go having actual generics, and accordingly removing some need for codegen, is not a bad thing at all.

No, I do believe Go will be improved if it allows library maintainers to implement some things like new collection types and utilities using generics.

I for one won't be using more advanced functional coding though, it's a lot more difficult to read / parse and the language is not designed with functional programming in mind. I'm sure people are already working on a functional programming library so you can do map / reduce / filter and the like, but readability and performance will be dreadful.

Strongly agree. Some people like to be on this endless treadmill of cutting edge features, like in this case parametric polymorphism but I refuse to be dragged in. They just make the code more complex with no benefit.

Next thing you know they will try to force me into structured programming (I'm sure there are people already working on do...while). They can pry goto from my dead cold hands.

Look, I'd just kill for a generic Set, tbh.

I've come to prefer code quantity over code complexity. I mean, what the fuck is

    def traverseImpl[F[_], A, B](fa: Option[A])(f: A => F[B])(implicit F: Applicative[F]) =
      fa map (a => F.map(f(a))(Some(_): Option[B])) getOrElse F.point(None)

This is probably less readable than it has to be due to the lack of whitespace. I don't know the language (Scala?), but I don't think it's too hard to work it out. First, the '[F[_], A, B]' part are type parameters, where I assume the 'F[_]' part is just a bad notation for a type of kind '* -> *' (i.e. a higher-kinded type). The next value parameter has type 'Option[A]', and after that we have 'A => F[B]'. This confirms my suspicion that 'F' is a higher-kinded type - and probably a functor or something. The 'f' parameter is a function for converting a value of type 'A' into a value of type 'F[B]', which looks like it's transforming a value and putting it into a functor or something. The final '(implicit F: Applicative[F])' hints at deep problems in the language, but I think it basically just says something like 'the higher-kinded type F must implement the trait/typeclass/whatever Applicative'.

That's the type. The expression makes little sense to me, but I suspect you removed a bunch of dots and maybe some parens, and it should actually look like this:

          fa.map (a => F.map(f(a))(Some(_): Option[B])).getOrElse(F.point(None))

So this function essentially just operates on an option value inside of some functorial context. This seems like a building block function that you might use often, but probably not write very frequently.

(If you didn't remove anything, then I suppose this language has infix 'map' and 'getOrElse' operators. I think that's bad, but it's not related to the type system.)

scala has infix everything ‘operators’. f(x) can be written as f x and f.g(x) can be written as f g x if the parser manages to figure out what it means (I’m sure they phrase that differently)

The idea, I think, is to make the REPL look more like a CLI, but it also may be a fight against punctuation.

It's only hard to read because you don't know the language; otherwise, it's fairly straightforward. Although the Applicative type would require some research. I would much rather use a language like this, which can create enough abstraction to enforce consistency throughout the codebase, rather than relying on boilerplate conventions. Just had a bug this week in Go where a previously defined "err" variable was re-used instead of creating yet another "err" variable, which caused the actual error never to get checked.

This function is certainly hairy, but its use in the codebase might very well be intuitive and simplify code that would otherwise be way less expressive. I personally think it can be worth it.

Transform fa using f, collecting the results as an option

What is funny about it? People have worked around the deficiencies of systems since forever in almost every field. Is it something you have never seen in real life?

C++ and Java also used codegen before they got templates and generics respectively, but since Go's mantra is to relearn history, they had to have a go at it.

S̷͕͂̒c̴͉͈̆ŕ̶̘͚̄e̸̘̽̌ͅa̵̼̳͌m̷̬̓̈́͜s̴͙͆ ̸̨͓̾̈́i̴͈̼͐ņ̶̟̊̍ ̵͍̐̊Ű̸̪̤n̴̪̝̅̍i̶̜͌c̷͎̩̑̀ỏ̶̜͊d̴̟̎͠e̴̘̿

Oh gosh oh gosh zalgo variables please no


The syntax the OP gave reference to has nothing to do with how generics in Go 1.18 will look.

If you are interested in generics in Go 1.18 please read https://go.googlesource.com/proposal/+/refs/heads/master/des...

Oh my god.

Very reasonable caveats for a feature as big as generics.

Personally, I'm super excited for the potential generics has to make error handling in Go less noisy, so I'll be attempting to use it ASAP.

Isn't the "noise problem" with Go error handling the control flow rather than the lack of generics? You want to return early (and potentially add context to the error) if there was an error, otherwise continue on. How are you thinking of solving this with generics?

The previous try() proposal added new control flow. That's part of the reason it was criticized so heavily -- it hid a control flow construct in something that looked like a function.

"Isn't the "noise problem" with Go error handling the control flow rather than the lack of generics?"

It's both. Adding only concise flow control, such as with the ? operator, works poorly if all you can return is multiple non-generic values; various common permutations get awkward in that case. And as you point out Result<> doesn't achieve much without the concise operators. Rust has very successfully shown the benefit of the combination.

I wrote a comment here pushing back somewhat on the idea that Rust has much better error handling ergonomics (not correctness, where it clearly wins, but ergonomics) but thought better of it; debating Go vs. Rust is about the least productive thing we can do on HN.

I like what Chris Lattner said about language design on the Lex Fridman podcast: that you know when you're doing something right because it "feels" right. Rust has a bunch of syntactic sugar of the kind that gets criticised in many languages, but to me things like error returns with ? "feel" right. Love Go as well, and the error handling makes a lot of sense in concept, but it just doesn't "feel" nice to look at or use.

I like that there's a language with such a strong vision, but dealing in absolutes has drawbacks; a little syntactic sugar would make error handling a lot nicer.

eh having used both, rust's approach isn't any better. when you need to decorate an error, which generally is a lot of the time, rust has the same ergonomics as golang. they optimized code for a generally rare situation.

I do appreciate rusts match syntax though.

For what it's worth I really like how go handles errors.

I’m with you on this one, Go’s error handling was intuitive to me. It was my first compiled and typed language, though. That might bias my experience.

> that you know when you're doing something right because it "feels" right

Absolutely false. When you write the really groundbreaking and important software "feels right" is absolutely not what you should be experiencing.

The correct mode you should be in is "holy fuck how did I get here, and can I come out in one piece"?

(A: no, you can't, but the new you will be a better you.)

If you don’t have a result type then you don’t have better error handling!

Zig disagrees. Monads are cool and all, but we can do better than Result for error handling.

Result is monadic but not necessarily a monad.

From what I know of zig’s error handling, I really don’t think it’s better than result types.

It is less work, especially when using error unions (though I think that makes it easier to miss changes to error sets as they’re implicit), but since errors are literally just u16 they’re also deeply lacking in richness and composability.

It’s definitely an improvement over C’s error style, and thus perfectly understandable given zig’s goal, but I very much disagree that it’s “better than Result”.

It’s also extremely magical in that it needs special language-level support throughout in ways Result is not, but obviously that’s more of a matter of taste.

> It’s also extremely magical in that it needs special language-level support throughout in ways Result is not, but obviously that’s more of a matter of taste.

That's a peculiar observation, it's like saying Rust's borrow checker is magical because it needs language-level support. I mean, that's the point.

In any case re: zig error or rust's Result, I don't think either of us are discussing features and pros/cons, but just describing what we like the best. Not very objective an argument.

> but since errors are literally just u16 they’re also deeply lacking in richness and composability

Zig errors are composable: https://ziglang.org/documentation/master/#Merging-Error-Sets and later sections.

> That's a peculiar observation, it's like saying Rust's borrow checker is magical because it needs language-level support. I mean, that's the point.

You can’t do borrow checking without language support (I think, at least not without a significantly more expressive type system) whereas Result is pretty much just a type, the features it uses are mostly non-exclusive, and those which are, are very much intended not to remain so (e.g. the `Try` trait for the `?` operator).

> Zig errors are composable: https://ziglang.org/documentation/master/#Merging-Error-Sets and later sections.

Not really? That’s just merging error sets, but you can’t say that you’ve got an error which originally comes from an other error, except by creating its dual and documenting it as such. Essentially your choices are full transparency or full type erasure.

> Not really? That’s just merging error sets, but you can’t say that you’ve got an error which originally comes from an other error, except by creating its dual and documenting it as such. Essentially your choices are full transparency or full type erasure.

What you just described are error traces. They're generated by Zig automatically for you, thanks also to the fact that errors are a special construct in the language.

Relevant langref passage:


Don't you need sum types/sufficiently powerful enums for Result though? Which, in a way, makes it a language feature.

You’re confusing the chicken for the egg.

Sum types are a powerful language feature, but they’re also a general language feature, they don’t exist for the sole purpose of creating a Result type (and indeed a number of languages have had the former and lacked the latter).

I still don't get what's wrong with having custom syntax or language facilities to support a feature. Why is that a problem?

Do you mean that in contrast to affine types, that are here for borrow checking?

No, I mean that in contrast to the other side of the discussion (zig’s errors).

And affine types are not here for borrow checking, it’s closer to the opposite. Affine types are a formalisation of RAII, borrow checking is a way to make references memory-safe without runtime overhead which mostly avoids unnecessary (and sometimes impossible requiring allocations & refcounting) copies of affine types.

AFAIR, Zig has an open issue on how to pass additional context with errors, such as e.g. row & col where a syntax error happened.

To clarify, I think parent means a syntax error in the sense of "I'm parsing a (code) file at runtime and there's a syntax error", not a language syntax error.

I mean there's nothing stopping you in zig from creating a rich "error" struct and returning a union of the struct with whatever you would have done otherwise. You only lose the error return trace.

"I mean there's nothing stopping you in zig from creating a rich "error" struct"

Thus Rust's std::result::Result<T, E> where the E in Err(E) is anything that implements the std::result::Error trait.

Rust got this right. It satisfies most use cases with the least possible noise and accommodates the weird ones easily, and does it in a std:: codified manner where no one has to wonder whether or not there is anything stopping them.

Have you seen what a "rich error struct union" looks like in zig? It's very easy and very legible code. I'd further bet that there are no cases when you're doing this that you want an error return trace. And the downstream handling code is very easy.

If there is a problem with how it's done in zig, it's a community one: maybe the pattern needs a megaphone beside it, like examples in the main docs or tuts in the various zig tut sites that have cropped up

"Have you seen what a "rich error struct union" looks like in zig?"

No, and I'm not arguing that zig hasn't done well. I do argue that Go has not; your only non-weird choice is to fetter code with miles of "if err ==/!=" gymnastics.

Traces are frequently very useful; precise traces have saved me a lot of pain in life. Lack of traces precludes nothing for me, but I have enjoyed the benefit of them enough times to acknowledge their great value.

Traces are fantastic for unexpected errors. You don't need them, for example, when validating user input, which is the type of system where you want richer errors.

(E is not required to implement that trait, it just often does)

Early return is definitely a part of the problem. One thing you could do with generics is create option/result wrappers that would work with any code and hack together a try/early return solution using panic/recover, but you'd offend the Go community, and at that point you might as well just use Rust.

> How are you thinking of solving this with generics?

People can now create Try or Option monads instead of returning the error.

This was one of the reasons anti-generics people were anti-generics

But that will make error handling more verbose.

Speaking as a user of languages that have exactly this, it can’t possibly be any worse than the situation in Go right now.

Other languages have an additional zoo of language syntax besides generics to make monadic error handling bearable, which Go doesn't have.

It'll make it unmanageable by hiding complexity.

But more verbose? I'm not so sure.

Passing a single anonymous function is already more characters to type than if err != nil {}.

Will it?

I can't see it myself, but happy to educate myself.

Right, I suppose I forgot that a big reason `Result` works well in Rust is the `?` operator.

At the very least, I'll appreciate having some more sugar over loops that are essentially just map/filter.

The ? operator wasn't in Rust 1.0

Before Rust 1.13, Rust had a macro named try! which did a similar thing to its argument, although the built-in operator has some fancier features.

It is both, because multiple return values break function chaining.

Was the try() proposal for Go error handling? I'm really curious what happened to the proposals for shorter Go error handling, rather than if err!=nil {...}

> Was the try() proposal for Go error handling?

Yes, it was the second proposal by the Go team for shorter error handling: https://github.com/golang/proposal/blob/master/design/32437-...

> I'm really curious what happened to the proposals for shorter Go error handling

The community didn't like it and it was declined: https://github.com/golang/go/issues/32437#issuecomment-51203...

Generics took several iterations from the Go team to finally land on the current (pretty good, imo) design. I might expect error handling to also require multiple tries before it sticks. And we should probably wait for the dust to settle on generics a bit before proposing another design for error handling.

I want the "if err != nil" back!

I don't see how it changes error handling. To implement a generic error handling function you would need a non-local return statement (panicking is not error handling). And a generic monadic Result type would make error handling more verbose due to its reliance on closures for subsequent actions.

This is why Java went with type erasure...

Related discussion from 10 days ago:

go: don't change the libraries in 1.18



If you want to get a head start, gotip makes it easy to build and run go from the development tree:


Had a long running bet that NCIS would be cancelled before Go got generics. Now it looks like I must eat my words.

eh, Mark Harmon left before Go got generics.

hmm. So, a tie?

Does anyone know the impetus for using [] in generics? I know Nim opted for the same syntax and I assume it has some parsing/tokenizing benefit like the switch to using “fn” a la Zig or Rust. However, the syntax seems needlessly ambiguous with array notation sharing the same operator. I know I’m not alone because I’ve seen others mirror my concerns on Nim forums [0].

[0] https://github.com/nim-lang/Nim/issues/3502

The original implementation used ( .. ), which I found an unreadable mess of parentheses soup, but luckily that was changed. The problem with < .. > is that it's ambiguous with the greater-than and lesser-than operators. From [1] (among many other discussions on this):

For ambiguities with angle brackets consider the assignment

    a, b = w < x, y > (z)

Without type information, it is impossible to decide whether the right-hand side of the assignment is a pair of expressions:

    (w < x), (y > (z))

or whether it is a generic function invocation that returns two result values:

    (w<x, y>)(z)

In Go, type information is not available at compile time. For instance, in this case, any of the identifiers may be declared in another file that has not even been parsed yet.

[1]: https://groups.google.com/g/golang-nuts/c/7t-Q2vt60J8

Why not {}? That should be pretty easily parsable, and seems less ambiguous than [] (also Julia at least is prior art)

You can't get more ambiguous to parse than {} in the context of Go.

> also Julia at least is prior art

Julia doesn't use curly braces for blocks and structs.

Why not use characters from the Canadian Aboriginal Syllabics block?

ᐸ & ᐳ

Are we going to ask this for every Unicode character pair? They have answered all these questions in the design document: https://go.googlesource.com/proposal/+/refs/heads/master/des...

They thought about generics for 12 years. It's unlikely that someone who thought about it for 3 minutes on Hacker News has some epiphany that wasn't taken into consideration.

I think this is a joke from another thread: https://www.reddit.com/r/rust/comments/5penft/parallelizing_....

Also, I think it is fair to challenge the decisions here, seeing as there have been a few choices in Go's history that ignore prior art from other programming languages.

It’s a reference to a cursed Reddit post, at the top of this page.

made my day

> In Go, type information is not available at compile time.

Not a compiler expert, but I got confused when reading this. When is it available?

I think they meant “type information is not available at parse time” (rather than “compile time”).

There is a longer and more precise description of the problem as part of this FAQ of the generics proposal:


…which includes an example and the statement “It is a key design decision of Go that parsing be possible without type information”.

Most (if not all) languages are compiled or interpreted in two stages: first the code is parsed (a.k.a. lexed, tokenized) to an internal representation (usually called "Abstract Syntax Tree", or AST), which just means translating something like ">" to "GREATER_THAN_SYMBOL", "5" to NUMBER, etc. and storing this in some data structure. This is also where you usually detect syntax errors like forgetting to use " to close a string and such.

In Go, you can do this fairly easily with the go/ast package.

Then you take the AST and do something with it. That something can be compiling the code, but also writing some sort of tooling like a linter, or identifier rename tool, or generate documentation from it, or whatnot. When compiling the code you need the type information, for a lot of other purposes you don't really care.

It's pretty valuable to keep the parsing as simple as possible; it makes it easier to detect errors, improves the quality of the error messages, and makes it easier to write tooling. It also keeps the code a lot simpler, easier to understand and modify, etc.

> Most (if not all) languages are compiled or interpreted in two stages: first the code is parsed (a.k.a. lexed, tokenized) to an internal representation (usually called "Abstract Syntax Tree", or AST)

Lexing/tokenization doesn't produce an AST; it produces a token stream. Parsing (which may or may not be preceded by lexing) produces an AST.

Let's face it, <> is just ugly, whereas [] is beautiful :)

Rust used [] for generics a very, very long time ago. A lot of people wish we still did. I personally am glad we do not, but that's only in the context of Rust. I think the choice the Go folks made is very reasonable.

> Does anyone know the impetus for using [] in generics?

Yes: https://go.googlesource.com/proposal/+/refs/heads/master/des...

is it really the case that parsers can't tell the greater than sign in "a, b = w < x, y > (z)" without type information? or is it simply difficult to parse for some parsers?

It's a problem all languages with <..> need to solve one way or the other; see e.g. https://blog.dyvil.org/syntax/2016/04/19/angle-bracket-gener...

So for C# it's resolved "by examining the token after the closing >: If it is one of (, ), ], :, ;, ,, ., ?, == or !=, the expression is parsed as a generic method call."

I assume this works reasonably well; however, it's not hard to see how this might be a problem in some cases where the parser gets it wrong.

thanks for explaining :)

The doc says it's actually impossible, and I'm inclined to believe so. Rust has a similar problem; in their case they adopted the "turbofish" operator, where the generic function call takes the following form:

    let (a,b) = w::<x,y>(z);

Can you as a human? If you can't do it, a parser can't do it either.

Eiffel is the first language I know of to use [] for generics. It's consistent with the interpretation of [] as "subscripting"; the generic is a family parameterized by types, and the subscript selects one element of the family.

What’s even more consistent is using juxtaposition to represent all parameterization, on the value and type levels. Haskell does this!

That errs on the side of too much consistency for me. I'd like a nice balance; a small amount of syntax... but not too small. After all, morally x.a, x[a], and x(a) are all the same thing.

I don't know the answer to that but Scala is also a precedent for using square brackets for generics (AKA type parameters). However in Scala arrays don't use square brackets so that confusion is avoided.

And Python and CLU, Barbara Liskov's OG generics language.

Maybe it would feel better if you thought of List as the set of all types that are List, and List[Thing] as extracting a single item from that set?

It makes sense, you are indexing the function with concrete types to select your implementation. This is consistent with slices and maps.

> Because we will not know what the best practices are for using generics,

Well, you could... take a look at languages created these past twenty years that support generics, learn lessons from that, and see how these lessons could apply to Go.

But well, the Go team is not exactly known for paying much attention to the state of programming language theory, so I guess that's out.

Given that every language is different, this is basically reasoning by analogy. Sometimes analogies are useful for explaining things, but they're not a reliable way of determining what will definitely work.

Reasoning by analogy in a sneering way doesn't make it work better.

Programming languages differ, yes, but not so much that they face completely different design decisions. Also, it's not "reasoning by analogy", it's learning from others.

In fact, why stop at generics? Why not go back and question structured programming? Or strong typing? Because structured programming and strong typing has proven to be extremely useful. Same as generics in statically typed languages.

Maybe reasoning by analogy and learning from others are about the same thing?

How much you should rely on others' experience, and when you should consider it a positive example versus something to learn from by avoiding, is going to be a judgement call.

Trivia: parametric polymorphism (aka generics) was first done in ML, 46 years ago. WP cites this reference:

Milner, R., Morris, L., Newey, M. "A Logic for Computable Functions with reflexive and polymorphic types", Proc. Conference on Proving and Improving Programs, Arc-et-Senans (1975)

(And of course we've been able to write generic code in dynamic languages for a long time, as this is a static typing concept.)

I think Go is all the better for not rashly copying every idea from every programming language. The delay on generics is a small price to pay (if indeed a “price” at all) to have such a simple, ruggedly utilitarian language which also happens to be an absolute joy to use.

You can do that, but that's still not as good as direct experience with Go generics in Go code. Those wouldn't be best practices, they would be guidelines.

It does seem strange. This is a problem that has been long solved elsewhere. I don't get the fuss over it.

Go is Special and not comparable to these lesser languages.

How are they implemented?

Compile time? With template expansion like D and C++?

GC Shape Stenciling, a hybrid of stenciling and dictionaries: https://go.googlesource.com/proposal/+/refs/heads/master/des...

How does that compare to other approaches for generics? I know that you can do monomorphisation, which usually increases the executable size and compilation time, and usually gives faster code, but I don't know about other approaches.

The two approaches are stenciling (what you call monomorphisation) and dictionaries. Stenciling/monomorphization increases binary size, compilation time and TLB/cache pressure. The dictionaries-only approach has more runtime cost. This hybrid is a middle ground of both.

So there's a runtime cost for using generics?

The linked article is a proposal, but at least when I tried Go a couple of weeks ago it did not use that approach, and to the best of my knowledge that proposal has not yet been implemented. Instead, as of now, Go uses the straightforward approach of generating one implementation for every instantiation, just like C++ compilers do.

The upside is there is no runtime cost to using generics, the downside is there is significant compile time and link time cost to generics.

The hybrid approach is absolutely implemented: https://github.com/golang/go/blob/master/src/cmd/compile/int...

> the downside is there is significant compile time and link time cost to generics.

Are we talking statistically significant or productivity significant compile time cost? Go has a fairly fast compiler, it would be a shame if generics made that a less stand-out feature.

Yes, the same cost as calling a method through an interface. IIUC, the concrete implementation can change.

No, it does not have the same cost as calling a method through an interface.

Yes, which I find unfortunate but I can respect that approach as it's likely the simplest way of implementing it and Go does want to avoid heavy abstraction in its type system.

The downside is you can't do something like have a function pointer to the generic itself or specify generic members of a dyn trait/interface or otherwise treat the generic as a first class value (although you can treat specific instantiations of it as first class). The generic is basically syntactic sugar for a macro like substitution system. C++ and other languages that take the monomorphization route all have this restriction, whereas languages with more "first-class" support for generics like C# don't and hence do allow the moral equivalent of a virtual template.

C# mostly gets away with it because it has JIT support for it; in the end you still need to generate code for every data-size it's used with. Obviously, if all data are passed by reference you'd only need to generate code for the size of the pointer.

Even in C#, generics are sort-of second class:

    interface IMyLovelyADT { R Accept<R>(IMyLovelyADTVisitor<R> visitor); }
is not isomorphic to

    Func<IMyLovelyADTVisitor<R>, R>
The interface is strictly more flexible because it's non-generic, see my comment at [0] for an actual code example (it's quite long) if you're interested, but the gist is "you need a non-generic wrapper interface IMyLovelyADT: it essentially does the same job as Haskell's forall, since generic types are not quite proper types in C#'s type system. Interestingly enough, upcoming Go 2's generics will not support this scenario: a method inside an interface will be able to use only the generic parameters that the interface itself declares."

[0] https://blog.ploeh.dk/2021/08/03/the-tennis-kata-revisited/

> Interestingly enough, upcoming Go 2's generics

It will be Go 1.18, not Go 2.

They've been discussed as one of the several "feature proposals for Go 2.0" for couple of years, the playground for them is called "https://go2goplay.golang.org/", the issues with them go into "cmd/go2go" tag on Github, so that's why I called them "Go 2's generics".

And weren't they supposed to be in Go 1.17 but then got delayed?

From the outside, it seems like adding generics to Go seems similarly controversial and slow as the Python jump to version 3 — although Go generics haven’t even landed yet.

Generics have the potential to impact a decent amount of the standard library. Although I’m not surprised that they are not doing a major version bump given the isolation of the feature at this stage, it will be interesting to observe its uptake in the community and whether fragmentation occurs over what is supported and what isn’t given their recommendation of isolating generics-related code in third party libraries.

I don't think it's anything like the Python jump from 2 to 3. That changed the behavior of a lot of code (unicode vs bytes, various syntax changes, import changes, etc), and there was no way to automatically upgrade a codebase (2to3 didn't really work).

With the addition of generics to Go, they've been careful to ensure it's fully backwards compatible, so all existing code will still work as is. Major version bumps are for incompatible changes (like Python 2.x to 3.x).

What they're recommending with isolating generic code in existing libraries is so users of older versions of Go (1.17 and prior) can still use those libraries, just not any new functions/types the libraries have added that use generics, and hence require Go 1.18.

Yep, and in particular, the version 1 moniker in Go has a precise meaning that the Go team committed to at 1.0:


The fact that they can release generics and NOT bump to 2.0 is a triumph of the language designers and implementers, a testament to how much effort has been put into keeping this commitment.

> We expect that some package authors will be eager to adopt generics. If you are updating your package to use generics, please consider isolating the new generic API into its own file, build-tagged for Go 1.18 (//go:build go1.18), so that Go 1.17 users can keep building and using the non-generic parts.

I have a feeling this won't happen.

You're right. I maintain a very large code base in a private repo and a successful open source project. Generics will allow me to clean up a lot of code duplication internally, but given that Go modules are distributed as source code, this would force everyone to use Go 1.18.

My plan is to not use generics for quite a while, as it's pointless to have two separate implementations, one for 1.18 and one for < 1.18, which is difficult, because I really like the feature.

Upgrading to the latest release of the Go toolchain is generally a painless experience; the Go team has a good track record of maintaining backward compatibility. Hopefully this means you can assume other users will be able to use code targeting new features of Go 1.18 in pretty much the same time window it takes for other releases to supplant old ones, making this feature not unlike previous (minor) language features that nevertheless would break users who cling to too-old toolchains.

I hope that the publicity caused by generics doesn't taint this release by causing people to unnecessarily put off a toolchain update they would otherwise have done if it weren't for generics.

Sometimes, people can't be on the latest version of software due to various compliance and regulatory issues, and updating anything to a new version requires some kind of re-certification or new auditing. So, when supporting such users, you need to work carefully on balancing their inability to update, versus yours to move ahead.

Hacking on some projects or experiments is one thing, but say you're providing code for the automotive industry or the payment card industry; you're in for a world of regulatory hurt.

Well, for the users, they should be upgrading anyway. I can't see a situation where someone would mirror your library in and not also have brought in, or be bringing in, a non-breaking language change in the new Go release. The only change-management advice I could give, which you'll probably know, would be to roll a month slower than Go's releases.

And so Go is one step closer to being a real programming language in many people's eyes.

It seems eminently foolish to think Go isn’t a “real language” for lack of generics—why should we concern ourselves about fools’ opinions?

Saying this about one of the fastest growing languages on the scene right now feels… weird?

I would say, many people also look down on much more popular languages, such as PHP. Go is still an absolute niche language, even more niche than languages like Scala or R.

Niche in your niche perhaps, but there's a lot of code in the wild written in Go. Hell, all the modern devops tools are written in Go (Docker, k8s & co.)

Meanwhile, as a non Java programmer, I've never seen a Scala app. I know about R, but nothing on my PC is written in it.

Agreed that PHP is massive in comparison to all of them.

Sure, devops tools are usually more fundamental, whereas high level languages are used for things closer to the business. Twitter's backend for example is mainly powered by Scala, but you probably don't know about it, because it is not as visible to you.

Enumeration isn’t a compelling way to indicate relative frequency (in this case, frequency of usage of Go and Scala, respectively). An enumeration of size=1 is even less convincing.

That was not my intention. The example was supposed to show that not all software has the same visibility to software-developers.

> Meanwhile, as a non Java programmer, I've never seen a Scala app.

One widely known Scala application is Kafka. Some parts of it are in Java, but actual core is still mainly Scala afaik.

Yep, Kafka is probably the most popular one. I've heard that the team is looking to replace the remaining Scala code in the project with Java once pattern matching and co. land. Spark is another beast that was written in Scala. Many report a high cost for compatibility. Nowadays, the Scala community is all about Typelevel and ZIO; if you are not a category-theory-minded person, you will have a hard time picking it up.

ZIO is not based on category theory. In fact, it's trying to be the opposite.

> I know about R, but nothing on my PC is written in it.

You could say the same thing about excel macros. They are both tools for wrangling data, tools that will typically not be distributed to end users.

For the love of God please let throwable and catchable exceptions be next

Hopefully not and never.

They already have it with panic and recover. They just don't recommend using it for error handling.

Even if this behavior is similar to how Java exceptions work, I love the way go packages are written.

Explicit error handling can be tedious, but I’m never confused about what some code will do.

Compared to Java where an exception may bubble up 10 layers to a catchall try catch statement, I actually know where the error is coming from.

Would be nice to have this abstracted away with tooling though. I would imagine go generate could be used to great effect in this arena.

Go errors are actually less useful in this case since they do not capture backtraces by default (if I remember correctly), and they invite gratuitous re-wrapping. Most times I had to troubleshoot something, it was useful to know the lowest level where the error happened, not the arbitrary level the application/library author decided was appropriate.

Agreed, there are tradeoffs, but at least Go’s approach has more potential—there’s nothing stopping Go from capturing stack traces and returning them on the error object. On the other hand, exceptions always have the confusing control flow problem.

I'm just stating a fact, it's slightly unnecessary to reply with your declaration of love for Go.

I prefer returning errors as a part of the function result myself, as you would in any decent language encouraging functional programming principles.

If anything, it shows that including a feature in a language (exceptions) will not lead to people misusing it if that misuse is not encouraged.

...because panic and recover is a whole different plane of existence. The recover feature is a necessary evil here, but needless for simple error handling.

Hopefully they will never introduce commonplace exceptions.

Please dont.


Sigh. Go without generics was useful. We built useful things with it.

Go with generics will continue to be useful, and it'll make it easier to build some things that were irritating to build earlier.

The design of generics went through so many rounds of iteration and discussion to make sure it wasn't a detriment to the language - indeed you can build everything that Go has already been used to build without ever knowing about or using generics.

Generics coming into a 1.x release also means that any code written under 1.x is expected to continue working perfectly, generics or not.

The process has made sure no damage will be done, and we now have a new toolkit to make solving some problems easier. I don't think anyone is trying to gaslight anyone by using generics.

That's an expected pivot. From "nobody needs generics" to "we absolutely needed 10 years to figure out the best way to do parametric polymorphism, which was already known for 35 years when the language was created".

And of course in addition to that, there will probably be claims that "no one claimed that generics aren't needed". I'm glad that we have the archives to show that this isn't true, and it's a good thing to keep in mind when opinions are being pushed on HN.

Often the people who expressed that opinion are different people than, for example, the person you responded to here.

Please remember that this is a forum of a very big group of participants with varying points of view and resulting opinions.

Even if he was previously a person who said Go doesn't need generics, I struggle with your point of view. It seems you're of the opinion that nobody should be allowed to... change their opinion? Or at least has to be continuously told that they're dumb because they held an opinion that turned out to be incorrect? We're all people and make mistakes/have incorrect assumptions about topics, so why not just accept that and move on?

You might want to refocus on what kind of person you want to be, because both of your comments come off as very hostile

You're arguing for charitable interpretations of statements by people who claimed Go didn't need generics or even that Go was better without them, saying that one should be able to change ones opinion without being called dumb. I fully agree.

Similarly, a charitable interpretation of what kubb wrote would be that they are referring to those who might have been dishonest in their defense of Go's lack of generics, which one might say kubb does indicate by using words like apologetes and zealots. The Internet is full of people who pick a team and will say dishonest things in perceived defense of it. I agree with kubb in this regard. That is the charitable interpretation of what kubb wrote, but instead you assumed kubb referred to everyone who ever voiced that opinion and suggested kubb should refocus on what kind of person they want to be.

The Internet needs more of charitable interpretations, and HN in particular. Perhaps I failed to interpret you charitably now? Nuances get lost easily in online debates... :)

I sort of agree. The most charitable interpretation is that kubb is “nutpicking”—addressing the least articulate and worst arguments of a community. But he would do us all a favor to acknowledge explicitly the boundaries of his criticism. For example, I hold the position that generics aren’t necessary, but that they will make some code more clear and a lot of other code less clear (and this has long been my position)—does kubb’s criticism apply to positions like mine? Am I his “zealot”?

Moreover, using terms like “zealot” to refer to people with whom one disagrees is very likely to inflame the thread (as indeed it already has, to a degree), whatever kubb’s intention.

The term "gaslighting" has become overwrought to the point where it's hard not to immediately dismiss anyone using it.

It is not a "pivot" for someone to say, "X isn't necessary", and then say, "X still isn't necessary but at least it's implementation doesn't break things".

There's no gaslighting. The Go team position was always something like "We didn't find a good way to include generics in the language, but if we do we may add them". That's a reasonable position.

However some people, which are referred here as "apologetes and zealots", claimed that you don't need generics and that they were a bad thing, which is a very different thing and not the opinion of the Go team.

The conclusion here is that these people were wrong, just like the ones that said that it was impossible to do anything serious without generics. While those people were busy arguing with other people on the net, the Go team and community were slowly but surely refining their ideas on generics and how to make them fit into Go.

I would say that Go without generics has been and is still useful for a whole lot of us.

This attitude/mindset seems so petty and disappointing. Feels to me like gotcha journalism or something.

> This attitude/mindset seems so petty and disappointing.

True, but not as disappointing as the attitude of the go language creators and maintainers.

> attitude of the go language creators and maintainers

I'm pretty sure none of the Go language creators or maintainers has a negative attitude towards generics.

Who's "they"?

Surely you don't mean the Go team, which has had this in the language FAQ since 2009 [1]:

Why does Go not have generic types?

Generics may well come at some point. We don't feel an urgency for them, although we understand some programmers do.

Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it. Meanwhile, Go's built-in maps and slices, plus the ability to use the empty interface to construct containers (with explicit unboxing) mean in many cases it is possible to write code that does what generics would enable, if less smoothly.

[1] https://github.com/golang/go/commit/dd64f86e0874804d0ec5b713...

You seem like one of those who remain more obsessed with the technology used in software than the technology produced by software, and can't see anything beyond it.

Finally catching up with modern times.

I have heard even Java is going to catch up with modern times with Valhalla, and maybe go even further later on, where a directory tree of source files can be compiled without weird contortions of the java command line.

Not paying attention to the AOT compilers available since 2000, or optimizations available on IBM J9?

I guess it is a thing, after all Go folks tend to ignore the experience from other programming communities.
