Hacker News
Go generics are not bad (lemire.me)
232 points by ibobev on July 8, 2022 | 286 comments



What I still don't understand is why golang has special-case exceptions for "language constructs" like make() and append()... while those are unimplementable in golang itself.

It's pretty annoying that golang embraces static and functional patterns in its core, but it doesn't even have map/filter/reduce/forEach as generic implementations for all data types.

The lack of a "new" keyword and of default values in struct field syntax is also pretty annoying. Let alone the "hack" with struct tags for decoding/encoding everywhere, just so that the encoder implementations can use the reflect API.

For-loop fatigue in golang makes it so unreadable in practice, and leads everybody to offload manipulation of arrays/slices to helper methods, just to hide the for loop.

The library that fixes this as well as possible with generics is the "lo" library (inspired by lodash) [1]. It's worth taking a look at it if you're curious how to implement this for your own structs.

[1] https://github.com/samber/lo
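For illustration, with Go 1.18 generics a Map/Filter helper only takes a few lines. This is a minimal sketch of the idea, not the actual lo API:

```go
package main

import "fmt"

// Map applies f to every element of in and returns the results.
func Map[T, R any](in []T, f func(T) R) []R {
	out := make([]R, 0, len(in))
	for _, v := range in {
		out = append(out, f(v))
	}
	return out
}

// Filter keeps only the elements for which keep returns true.
func Filter[T any](in []T, keep func(T) bool) []T {
	out := make([]T, 0, len(in))
	for _, v := range in {
		if keep(v) {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	doubled := Map([]int{1, 2, 3}, func(n int) int { return n * 2 })
	big := Filter(doubled, func(n int) bool { return n > 2 })
	fmt.Println(doubled, big) // [2 4 6] [4 6]
}
```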


> but it doesn't even have map/filter/reduce/forEach as generic implementations for all data types.

Generics landed in the latest stable version, 4 months ago. Before making too many additions to the stdlib too early, they want to observe how generics are used in the wild. I don't have the link handy, but it was mentioned in one of the generics discussions (an issue on GitHub).

In these cases (another case was error wrapping prior to Go 1.13) they start with experimental libraries in `golang.org/x/exp/...`, which they did here as well:

https://pkg.go.dev/golang.org/x/exp/slices

https://pkg.go.dev/golang.org/x/exp/maps

So while the stdlib doesn't have the functions you're looking for yet, this might change over time.
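To give a flavor of what those experimental packages cover, here are simplified reimplementations of two of the helpers (a sketch of the concept, not the exact x/exp code):

```go
package main

import "fmt"

// Index returns the position of the first occurrence of v in s, or -1.
// (Simplified version of the helper golang.org/x/exp/slices provides.)
func Index[E comparable](s []E, v E) int {
	for i, x := range s {
		if x == v {
			return i
		}
	}
	return -1
}

// Contains reports whether v is present in s.
func Contains[E comparable](s []E, v E) bool {
	return Index(s, v) >= 0
}

func main() {
	names := []string{"ann", "bob", "cy"}
	fmt.Println(Index(names, "bob"), Contains(names, "zed")) // 1 false
}
```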


>they want to observe how generics are used in the wild. I don't have the link handy, but it was mentioned in one of the generics discussions (issue on GitHub).

The rationale can be found here:

https://github.com/golang/go/issues/48918


There’s a whole lot wrong about this comment, but the most obvious is the claim that Go lacks a new keyword. Firstly, Go has a new keyword and secondly who makes a big deal about whether or not a language has a new keyword?


I believe what the commenter is complaining about is that you can't specify default values through a constructor that is invoked with the new keyword. I personally don't see this as a huge concern, since you can just write a constructor function, but it's a valid one, because a plain struct literal still lets people shoot themselves in the foot.

Go's paradigm of making the zero value something meaningful is great, but unfortunately it isn't always viable. Even in the stdlib there are a couple of structs where you're better off overriding the zero values.


Sure, allowing struct initialization without explicitly setting fields is a foot gun. Ideally Go wouldn’t assign meaning to zero values and instead require each field to be explicitly initialized a la Rust. This is one of the things Go got wrong and Rust got right.


in Go ‘new’ is a function, not a keyword

I think that parent probably refers to the lack of something like this:

x := new MyType


This requires an exception mechanism to handle allocation failure. This issue pervades C++ when exceptions are disabled. Important error handling goes unhandled.


How does Go handle allocation failure? In my understanding objects may be allocated on the stack or heap per the compiler's whims. If a heap allocation fails, you just get "fatal error: runtime: out of memory" and an abort.


Your understanding is correct. That's also what you will effectively get with C or C++ on Linux, anywhere you would have had a choice between C++ and Go to begin with.


Since when do heap allocation failures on Linux result in an abort? In C, malloc(3) will return NULL on failure and set errno accordingly. Sure, if overcommit is enabled, you might get a fault if you try to access memory that was allegedly allocated, but there is no strict "malloc failure === fatal error" relationship.


malloc never fails on normal Linux configurations except in very rare instances not applicable to this discussion (e.g. allocating a single structure larger than your virtual memory space).

> if overcommit is enabled

This is the case on ~all systems.


Can’t hate on generics, so we need to find something else.

Go takes away type system fidget spinners and leaves devs with nothing to do but the actual job, which they hate, and thus they hate Go.


Go helps devs by creating extra work like checking for inner nil and outer nil, hitting the "if err != nil { return nil, err }" button on their keyboard, using reflection to instantiate unexported structs written by library authors who decided nobody should ever be able to make an interface-compatible wrapper that adds capabilities to their library, and relying on the language to enforce the checking of error return values, which it won't actually do if you don't assign any of the return values to variables.


There is no “inner/outer nil”. People just need to understand how interfaces work (they’re a reference type and sometimes they reference another reference type, so just like any language with nullable pointers/references, either the pointer or the pointee can be nil). 101 level stuff. There are valid criticisms of Go (I’m not a fan of nil in the first place, for example), but this inner/outer nil myth needs to die.


Sure, if any method anywhere takes a pointer receiver, then anyone can declare an interface that matches that method, and that interface is (among other things) like an Option<Option<T>> in Rust or a ??T in Zig. It's weird to post "There is no inner/outer nil" since the most straightforward interpretation of that claim is that there is no None/Some(None) distinction, and this is straightforwardly not true.

For some reason, most languages do not encourage the users to create types that are similar to Option<Option<T>>.

Have you actually taken this stuff to a group of first-semester CS students and had them understand it and think it was a good design?

In the stdlib there's this function:

  // String returns the source text used to compile the regular expression.
  func (re *Regexp) String() string {
      return re.expr
  }
which takes a pointer-to-struct receiver and does not check it for nil before using it. So any program that declares a Stringable interface, diligently checks Stringables to make sure they are non-nil, then calls String() only on the non-nil ones is allowed to panic on this line from the stdlib because re == nil. The wonderful tutorial[0] teaches beginning go programmers to write the same sort of bug in the first lesson about interfaces.

[0]: https://go.dev/tour/methods/9


> the most straightforward interpretation of that claim is that there is no None/Some(None) distinction, and this is straightforwardly not true.

That’s not the most straightforward interpretation, since it would invalidate the original criticism that I responded to (that Go more-or-less uniquely requires you to check inner/outer nil).

> For some reason, most languages do not encourage the users to create types that are similar to Option<Option<T>>.

Lots of languages “encourage” references to reference types (including Rust), but many of the ML family including Rust have non-nilable references. Go’s error (and that of many others including Python, Java, C, JS, etc) isn’t that it supports/encourages references to references, but rather that all references are nilable.


A difference between Go and Java is that a null Regex in Java will not be implicitly converted to a non-null Stringable containing a null Regex.

If you'd like, I can post about "implicit conversion from nil pointers to non-nil interfaces containing nil pointers that can only be checked for nil using reflection, combined with interfaces that work by duck typing and an existing body of code that pervasively uses struct pointer receivers in methods that panic when passed a nil pointer, even if it was a nil pointer contained in a non-nil interface" but that's a bit of a mouthful.


> A difference between Go and Java is that a null Regex in Java will not be implicitly converted to a non-null Stringable containing a null Regex.

You have it exactly backwards--you're arguing that Go should implicitly convert a not-nil interface (that is implemented by a nil pointer) to a nil interface so that Go interfaces behave more like Java interfaces, but Go isn't doing any implicit conversions (unless you consider "assigning a concrete value to an interface" to be an "implicit conversion", but Java does this too).

> implicit conversion from nil pointers to non-nil interfaces containing nil pointers

Again, Go doesn't do any such implicit conversion, and you're arguing the position that it should. An interface is just a tuple containing a pointer to the data and a pointer to the type `(pdata, ptype)`. A nil interface is `(nil, nil)`, and an interface that is implemented by a nil pointer is `(nil, <pointer-to-concrete-type>)`. You're arguing the position that the latter should be implicitly converted to the former, but this would break code that uses nil as a valid value.

The root cause of the problem isn't how Go handles nil pointers in interfaces, but rather that Go has nil pointers, interfaces, etc at all. Go gives you no way to express that a reference type can't be nil, but ideally it would have an enum system like Rust's that makes optionality an opt-in property via `Option<T>`. If an interface `Bar` is implemented by `*Foo` and neither `*Foo` nor `Bar` can be nil, then this problem largely goes away. If `Bar` is implemented by `Option<*Foo>`, then `Option<*Foo>`'s methods need to handle both the `Some(*Foo)` and `None` (i.e., nil) cases.

> that can only be checked for nil using reflection

You don't need reflection:

    i, ok := iface.(*int)
    if !ok {
        fmt.Println("`iface` is not an `*int`")
    } else if i == nil {
        fmt.Println("`iface` contains a nil pointer")
    } else {
        fmt.Printf("`iface` contains a pointer to `%d`", *i)
    }


> Go isn't doing any implicit conversions

What do you call it when you assign a value of one type to a variable of another type and the language creates a new value, different from the one you were assigning, and stores that new value in the variable? In C++ this is called "implicit conversion." Here's the tutorial again:

  var a Abser
  f := MyFloat(-math.Sqrt2)
  v := Vertex{3, 4}
  
  a = f  // a MyFloat implements Abser
  a = &v // a *Vertex implements Abser
The last two lines involve constructing a new value for the LHS of a type that differs from the type on the RHS without any function calls, constructors, "make" keywords, etc. What word would you like to use for this?

You could just as well have a patch that changes the type of a single variable from *Vertex to Abser, or changes the return type of a single function from a type that matches an interface to the interface itself. That would also create an assignment that constructs a new value of the new type, one that will panic in contexts where the old one would not have panicked, even though the programmer has not added any code saying "please construct an Abser (type, pointer) tuple from my pointer."

> Go should implicitly convert a not-nil interface (that is implemented by a nil pointer) to a nil interface

I am not arguing for that.

> unless you consider "assigning a concrete value to an interface" to be an "implicit conversion", but Java does this too

I guess, but this is much less of a conversion than Go is performing because the runtime value does not change at all. It's the same pointer, not a tuple containing the pointer. Unlike in Go, this assignment may very often be correctly implemented by a sequence of 0 instructions! Importantly, the meaning of "foo == null" is unchanged. This is not because Java performs implicit conversions of interface types to specific pointer types at the null check. It's because in Java interfaces are unable to express having a concrete value of a specific class which is null, because they are just pointers. One might say "Java interfaces do not have inner and outer nil values" as a shorthand for this.

> Go gives you no way to express that a reference type can't be nil

I agree that this would solve the problem.

> You don't need reflection:

You don't need reflection if you statically build an exhaustive list of all pointer types that match the interface in the whole program but don't actually match the interface in the sense that the methods don't handle nil, then generate code to handle each of them. This just doesn't seem like a reasonable option because sometimes people make libraries so they can't necessarily do static analysis of the whole program at the point where they are writing the code. It also sort of defeats the purpose of interfaces, I guess.


> What do you call it when you assign a value of one type to a variable of another type and the language creates a new value, different from the one you were assigning, and stores that new value in the variable? In C++ this is called "implicit conversion."

Right, I addressed this when I said:

> unless you consider "assigning a concrete value to an interface" to be an "implicit conversion", but Java does this too

So let's just skip to your reply to that bit:

> I guess, but this is much less of a conversion than Go is performing because the runtime value does not change at all. It's the same pointer, not a tuple containing the pointer. Unlike in Go, this assignment may very often be correctly implemented by a sequence of 0 instructions!

Yes, Java and Go have different implementations/conceptions of "interface". I don't see what bearing this has. From the programmer's perspective, Java's interface assignment isn't more explicit than Go's even though the underlying mechanisms differ. In particular, Go needs to create a new "header" value because its interfaces are polymorphic across any type (including primitives and value types!) whereas Java's are polymorphic only across objects and Java attaches a header to every object.

> Importantly, the meaning of "foo == null" is unchanged.

Go's `foo == nil` isn't changed either. It behaves like it does for all reference types. The difference between Java and Go is conceptual, but the `==` operator is coherent within each of them.

> This is not because Java performs implicit conversions of interface types to specific pointer types at the null check.

Agreed, and to be clear, I didn't claim it did. Nor does Go, so I'm not quite sure what you're responding to here.

> It's because in Java interfaces are unable to express having a concrete value of a specific class which is null, because they are just pointers.

> One might say "Java interfaces do not have inner and outer nil values" as a shorthand for this.

Right, but we were debating whether or not Go interfaces have inner or outer nils. Consider `var x interface{} = 0`. What's the outer value? And what's the inner value? The whole point of my comment is that you're conceiving of Go interfaces as having inner and outer values, but Go interfaces are just a fat pointer. As such, you can have pointers to pointers with Go interfaces. And while you can't have pointers to pointers with Java interfaces, you can elsewhere in Java so a Java programmer should be able to reason about Go interfaces because they can understand the concept of a nullable pointer to a nullable pointer.

Ultimately, it seems that you're criticizing Go for having a different interface mechanism than Java, and you get bugs when you treat Go interfaces like Java interfaces. And Go made the right choice here, because it can express polymorphism over a wider range of types than just objects (contra Java interfaces) and indeed it doesn't have any sort of object/primitive bifurcation--from the perspective of Go interfaces, all concrete types are just data.


> Go's `foo == nil` isn't changed either

This is incorrect. The value of "foo == nil" in function A can be changed by a patch that only changes the return type of function B from *Vertex to Abser. If you would like to say that this is not because of implicit conversion from pointer to interface and it is not because of any semantic difference between comparing the vertex pointer to nil and comparing the Abser interface to the nil interface value, I would like to know what you think causes it.
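That patch scenario can be sketched concretely (the names are hypothetical):

```go
package main

import "fmt"

type Vertex struct{}

func (v *Vertex) Abs() float64 { return 0 }

type Abser interface{ Abs() float64 }

// Before the patch: func lookup() *Vertex { return nil }
// After the patch, only the return type has changed:
func lookup() Abser {
	var v *Vertex // nil, exactly as before
	return v      // now wrapped as (type=*Vertex, value=nil)
}

func main() {
	foo := lookup()
	// With the old *Vertex signature this printed true; after changing
	// only function B's return type it prints false, with zero changes
	// to this caller.
	fmt.Println(foo == nil) // false
}
```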

> Right, but we were debating whether or not Go interfaces have inner or outer nils

I haven't been debating this since it's obviously true. You seem to agree that an Option<Option<T>> may take on the values None and Some(None) separately. You can debate about what to call things, but I'm still not sure what you would like me to call the difference that exists between Java and Go which causes code that does not throw NullPointerExceptions in Java to panic in Go because the inner value is nil and the user has failed to use reflection (or generate an exhaustive list of all possible pointer types that could match the interface but whose method does not handle nil and check each of them individually).

> Consider `var x interface{} = 0`. What's the outer value? And what's the inner value?

What do you mean? That's an (int, 0) tuple. The type tag is int. The inner value is 0. interface{} expresses outer-nil-ness using a special value of the type tag, so that's (nil, nil), which is a different value from (*Vertex, nil).

> And while you can't have pointers to pointers with Java interfaces, you can elsewhere in Java so a Java programmer should be able to reason about Go interfaces because they can understand the concept of a nullable pointer to a nullable pointer.

Please demonstrate by declaring a pointer to a pointer in Java. Ideally, you could also show how in Java to convert a pointer to a pointer to that pointer using a single assignment, without calling any constructors or functions or making any explicit casts.

> Ultimately, it seems that you're criticizing Go for having a different interface mechanism than Java, and you get bugs when you treat Go interfaces like Java interfaces. And Go made the right choice here, because it can express polymorphism over a wider range of types than just objects (contra Java interfaces) and indeed it doesn't have any sort of object/primitive bifurcation--from the perspective of Go interfaces, all concrete types are just data.

No, I'm criticizing Go for being fundamentally broken in a novel way that no language had come up with previously. It should seriously not be possible to cause a panic in a method call that is guarded by a nil check by changing the return type of some other function from a pointer-to-struct to an interface and making 0 other changes.

If you ask the language designers on the mailing list why it's like this, they'll tell you that it's important to make the zero values of types useful, including nil pointers of specific types. They also decided to ship a stdlib in which like half of the methods with pointer receivers immediately panic when passed nil, and they would like to tell you that the right way to avoid panicking is to not store such pointers in such interfaces. The language will do nothing to help you to avoid storing these pointers in these interfaces. Even if you successfully avoid storing these pointers in these interfaces in your code, if you store them in any interface at all, other code may use "interface upgrades" to find out that the methods that panic exist and call them. Wow!


Library authors not exporting their fundamental data structures is very painful, but this happens in many other languages too.


No it's simpler than that, they hate Go cause it's a half-assed language. The creators didn't respect developers enough to make it consistent and complete cause "devs are not smart enough". The argument about "getting things done" is like saying the language is Turing complete.


[flagged]


That was a generic argument, but still a valid one. Go is a language that indeed misses several features, including safety features, that help people perform their job better, and the excuses for that aren't exactly well thought out. It is a systemic issue with the language, and criticising it can feel like a cheap shot, but that's the way it is. It's nice that they're reverting their position on that (re: generics), but there's still a long way to go.

Also: just because someone isn't forced to use it, doesn't mean it's immune to criticism, no matter how vague it is. And also, some people are forced to use it at their day job. Again, just because they can quit their job, doesn't mean the language is immune to criticism. I would suggest some introspection if criticism of something you like is causing you to dismiss their arguments and attack them, no matter how bad they are. There was no attack to you.

I don't understand why this always happens in Go discussions. It's a language with a lot of potential, but people still stoop down to attacking others in order to defend that it should be frozen in time.


> I don't understand why this always happens in Go discussions. It's a language with a lot of potential, but people still stoop down to attacking others in order to defend that it should be frozen in time.

It's not about wanting it to be frozen in time, it's more about wanting to have an interesting discussion. Repeating the same mantra in every Go thread is cheap, tiring even.

Before it was generics, now it's some vague 'long way to go' thing. Again, I'd be happy to see proposals, arguments, etc., but 'Go sucks' isn't original or interesting anymore.


Who cares if it’s cheap and tiring, it’s the truth. When it was missing generics, saying so was also called a “cheap shot”.

There’s plenty of proposals of things that can be improved in this thread alone. Not every post has to contain them to be valid criticism. Sure it could have more, but that doesn’t warrant personal attacks.


> Who cares if it’s cheap and tiring, it’s the truth.

You do realize that is a matter of personal opinion, right? There's plenty of developers who like the language (which doesn't mean they're not seeking to always improve it), and then there's those who hate it. But a comment from someone who loves Go but says nothing more as to why would be equally as useless as a generic comment from someone less fond of it.

They're free to continue posting such comments and I am free to explain why I find them useless. Simple as that.


No. It's not a matter of personal opinion. Go indeed lacks "several features, including safety features, that help people perform their job better". Whether you're one of those people and whether the language really needs those features is a different issue. But when it comes to modern languages Go stands out for lacking several important features. And of course no language "needs" most features: as long as it's Turing Complete it can do anything. But Turing Completeness is the lowest bar possible, and in the real world you need more.

What people are trying to have here is a starting discussion of possible paths for the language to grow, no matter how crude that "start" is. Is it "cheap" criticism? Sure, but it is valid criticism of the language. Sure, you can call those comments useless, but that's not what you're doing. What you're doing instead is taking language criticism personally, and then directing personal attacks at the people making it. That's why your post was flagged, btw.


> No. It's not a matter of personal opinion. Go indeed lacks "several features, including safety features, that help people perform their job better".

That is a subjective statement. There's no objective list of language features that make a programmer's job better. I'd like to see optional types in Go, but I can't say that it would objectively make the vast majority of programs I've written better, not to say about the wider community.

I liked the error handling proposal from the Go team, but the wider community rejected it. Was it 'objectively bad'?

I could go on and on.

> What people are trying to have here is starting discussion of possible paths for the language to grow, no matter how crude that "start" is. It is "cheap" criticism? Sure,

My problem is exactly that this is not the case. You don't start a discussion on 'possible paths' towards improvement without actually discussing any concrete path.

Take a look at Swift, for example, whose original author stepped away due in part to the constant onslaught of features at the expense of wider design considerations (as he perceived it).

Go could certainly go faster, but I do think the 'how fast' is a delicate balance that doesn't have a simple answer.

It merely looks like the typical HN mantra on programming languages. It's like posting about Rust in every thread on C++. I am someone who happens to like Rust but I don't think doing that would win us many fans.

In many ways OP's criticism is even less concrete.


> There's no objective list of language features that make a programmer's job better.

Quite the contrary, the list is often very objective. People know what makes them more productive. It’s just not a single list for everyone. This is why people engage in language criticism: to push for features THEY want.

You say “don’t use the language”, we say “we use what we want, maybe you don’t use the features”.

> You don't start a discussion on 'possible paths' towards improvement without actually discussing any concrete path.

Read the whole thread. It started with concrete paths: functional operators like map/reduce/etc.

The suggestion was promptly dismissed by the people against changes in Golang by calling it “type system fidget spinners”. That’s a lowbrow dismissal and an attempt to nuke civility.

The poster you attacked was replying to that. So they must “propose changes” on every single post, even on meta posts replying to your crowd?

The only people behaving badly here so far are the anti-change people.

This is what I’m addressing. I’m not asking for features. I’m asking for your crowd to stop behaving like children.


> Read the whole thread. It started with concrete paths: functional operators like map/reduce/etc.

I did and did not find your characterization accurate.

> The suggestion was promptly dismissed by the people against changes in Golang by calling it “type system fidget spinners”.

There was some of that, but there was also a genuine and accurate response to the suggestion saying that, with generics now being part of the language, it is a case of getting them settled (i.e. the exp/constraints package) and seeing what they could be utilized for in the standard library. What you're asking for already exists as 3rd party packages, so as generics are adopted in the stdlib in 1.19 and later there's no reason not to have map/filter/reduce in the stdlib as well; there's certainly no opposition to it from the core contributors or the community on the whole.

The fact that the focus is on the 'fidget spinners' comment and not the one explaining the situation in detail is curious indeed.


The reason I'm focusing on the fidget spinner guy and you is that those posts are the grandparents of the posts we're currently writing and replying to. If I wanted to reply to the other posts I would do so below those posts. I merely upvoted PhilippGille because he provided a good answer and continued with my day.

We're in a chain of comments. I suggest clicking on "parent" until you get to the post I'm talking about and you'll understand better.

Just because someone else was civilised once in an unrelated thread (and, as far as I know, PhilippGille is not affiliated with you) doesn't make YOUR behaviour right. Start owning up to your mistakes.

The explanation from Phillipp is satisfactory but isn't part of this thread, and doesn't excuse the lack of civility and personal attacks from you. Nor does it erase the fact that what you're asking for (a discussion starting with what can be improved in Go) was provided from the get-go.

Seriously: the only real issue here is YOUR lack of civility. Not Golang, not the people criticising it. Own up to your mistakes. Maybe work on your issues too.

Rather than picking apart and dissecting my replies and other people's, maybe try addressing YOUR behaviour at least once. You're the one whose post was flagged. It's not justified by anyone else's post.


The parent to my original comment states:

> No it's simpler than that, they hate Go cause it's a half-assed language. The creators didn't respect developers enough to make it consistent and complete cause "devs are not smart enough".

If that isn't the definition of flamebait and a personal attack, I don't know what is. It certainly is far from measured, concrete suggestions for improvements to the language.


Once again, this comment was crude but factual, and it was answering a bait/attack post in the first place. And Go is indeed a simpler language by philosophy, one that is widely criticised for lacking certain features a lot of people consider important, and it is made for productivity according to the authors. As for the "not smart enough" part, it's not a real quote, but it isn't much of an exaggeration. Here's a real quote by Rob Pike:

"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt" – Rob Pike @ Lang Next 2014, From Parallel to Concurrent

Now read the parent of that post, about the "type system fidget spinner" and you'll see what a flamebait and well disguised personal-attack looks like. Or yours, to see a real personal attack that got flagged and is now marked as dead.


> Go is a language that indeed misses several features... that help people perform their job better... It is a systemic issue with the language

The only language in which "misses several features" is not "systemic" is C++, and having every feature seems also to not really help anyone do their job better.

I'm sure your language of choice is perfect, though.


Missing features is not the problem. A language is just as much what it has as what it doesn’t have. But the actual feature-set and lack-of-features Go has for it is ripe for plenty of valid criticism as is.


> the actual feature-set and lack-of-features Go has for it is ripe for plenty of valid criticism as is.

But this, as a criticism of Go, is vapid. Level the actual criticism, not a vague gesture towards the possibility of it!


The post that you're all talking about was replying to someone who attacked everyone criticising Go and called advanced PL features "type system fidget spinners".

Addressing this kind of post doesn't require providing examples of how Go can grow. Also, I don't see why the bar for criticising Golang's stagnation is so much higher than the bar for people who want the language to never change.


Bullshit. “Misses several features” doesn’t mean “misses all features ever invented”. What an absurd interpretation. There are several languages in between Go and C++ that suffer from neither problem of having too much or too little.


Your lack of specifics means you’re more interested in language bashing than discussing the relative value of features for different contexts.

It’s been almost two decades of increasing memory latency relative to clock yet Java still lacks value types? Unusable!


Alright, so you went from twisting my words to claiming I can't have an opinion unless I provide specifics. This thread already started with specifics: functional operators like map/filter/reduce/etc. What I'm addressing are personal attacks by Golang luddites.


That’s actually a brilliant assertion (sorry for the crap pun).

The majority of developers I work with are only aware of the success paths of their code. That means we’re knee deep in exception corpses because they avoided doing part of their job.

Go makes them deal with it.


When we were rewriting a module in our backend from PHP to Go, explicit error handling helped us find and fix bugs due to edge cases we weren't even aware of.

The usual idiom in PHP was to just catch exceptions at the top level and show an error (I think that's what exception handling usually devolves to). Since in Go we were explicitly handling errors at every level in the chain, it forced us to reason about error conditions more carefully and made us more aware of potential edge cases. That didn't happen when we were writing in PHP, because you only saw the happy path when looking at the code (i.e. little mention that something can go awry, because exceptions are hidden from control flow).

I remember in one such case, we found out you couldn't just bubble up an exception, you had to process it immediately, otherwise data could end up in an inconsistent state.

However, repetition can introduce bugs, too, for example a very common mistake is to write this:

  if err != nil {
    return nil
  }
instead of this:

  if err != nil {
    return err
  }
which is pretty serious because you essentially ignore the error.

This particular pattern is so common your brain often ends up completely skipping it and it's pretty hard to detect during code review.


Now I don't know how your PHP code was structured, but in my experience there are usually three types of mistakes made with error handling in PHP code:

1) not properly checking return values from standard library functions

2) not checking correct last error level function after use of standard library function, e.g. json_last_error(), curl_errno() etc

3) different handling of old style errors vs exceptions

This is an unfortunate legacy of PHP, it is messy and error prone.

I'm not a golang programmer, but my understanding is that the golang compiler does not enforce strict error handling: if a function changes its signature to start returning an error when it previously didn't, the compiler will not warn about it. This is unfortunate too.

I've never really been a huge fan of exceptions either, but the good thing about exceptions is that they are not silent by default, whereas return values can be ignored and therefore errors can be missed.

What I'm trying to say is that we need stricter compilers.

My personal style of coding in PHP to avoid hidden bugs is to code in defensive style, where the unhappy path is as important as the happy path, combined with strict type handling where I try to use the type system to detect bugs.


>if a function changes its signature to start returning an error when it previously didn't is not something the compiler will warn about.

This is why we use linters, they can catch this kind of problem.


Sure, Go makes you write code to do something in the case of an error - over and over again at every level in the chain. Which tends towards programmers putting in minimal effort each time, often meaning when some low level error occurs in the resulting application little information is forthcoming to the user as to why it's just aborted on them. Good and bad error handling can be done in any language. I've not seen any evidence that Go apps and libraries generally do it better than those written in exception-enabled languages.


Not really.

Rust, Haskell, Kotlin, yes. They force you to consider the error path.

Go makes it all too easy to just ignore errors and produce crashy code.


Go makes it explicit where the error was ignored so you can go and wring the neck of the bastard who ignored it :)


Sure, but the compiler doesn't care, and that's the problem.


Generics are useless when there is no way to integrate them with your rest of the code.

As golang has no instanceof or typeof keyword, you practically cannot use a method that takes multiple types as arguments without having to use the reflect API, just to have an if/else branch on what to do with that argument.

The things that I mentioned come from another point of view, where language designers and implementers decided to implement the interactions with their data types differently: by providing overloadable methods with different function signatures for the same function name, whereas in golang everything must be "namespaced" in a sub-sub-sub package.

Good luck finding the right place to put a method that can handle two different structs. If they're nested, it's not even possible, because golang's dependency resolver doesn't allow cyclic dependencies.

Parsing html5 in golang is a nightmare, and I've realized that this language is not made for such a use case.


>As golang has no instanceof or typeof keyword, you practically you cannot use a method that uses multiple types as arguments without having to use the reflect API

You don't have to use reflection; there are type assertions:

"value, ok := arg.(MyType)"

You can also switch on a type. How is it different from instanceof in practice?


The point of the described scenario was to have a _different_ if/else-if branch for each type, not identical behaviour for one struct type.

If item instanceof HouseStruct {}

Elseif item instanceof ContainerStruct {}

With your suggested solution you end up with OOP fatigue and GenericBuildingImplementationStruct which is probably not what golang wanted to embrace with their type system.


I'm not sure I follow what you mean. In Go the equivalent to your example is:

  switch val := item.(type) {
  case HouseStruct:
      handleHouse(val) // val has static type HouseStruct here
  case ContainerStruct:
      handleContainer(val) // val has static type ContainerStruct here
  }


Doesn’t type switch require an interface value? Go tries not to add boxing-like overhead to generics.


What is for loop fatigue?

Seems like a fake problem that one would only feel if they buy too much into functional programming.

Some ideas in functional programming are useful, but overall, for loops are much preferable to map/filter/reduce.

For loops are much more easily amenable to incremental changes.

"reduce" is particularly bad. It obfuscates control flow for no benefit at all.


>but overall, for loops are much more preferable to map/filter/reduce

I strongly disagree on this one. A .map or a .filter are clear and concise while filtering or mapping, particularly in a streaming situation where you're only iterating the collection once.

You can trivially write code using this paradigm that's easily readable, performant, and takes up a handful of lines that would take dozens in Go.


Either you start making a language from a sound theoretical foundation and then implement it (Koka springs to mind), or you implement a language from some syntactic gripes and then try to find the theoretical foundation later… Go was impressive in the practical sense (compiler speed, channels in std and good tooling) but a shit show otherwise. I've completely lost any confidence that FANG will be able to make some great language because they always take the easy way (makes more business sense, see Dart). Academia on the other hand will make great languages that never reach critical mass.


Ian Lance Taylor worked on generics for like a decade, and the final design was heavily based on the Featherweight Go paper, which had formal proofs of the design given a simplified version of the go language. This is a very odd complaint around this feature.


Yes retrofitting is hard. However, "Generics", also known as parametric polymorphism is well understood and goes back to Milners ML from the 70s.


While something may be well understood in an academic sense, that doesn't mean the application of that understanding to an implementation humans will take a liking to is well understood.

If you look at the programming language landscape I think one wouldn't be unreasonable in pointing out that very little is understood in terms of how you go from good ideas to widely adopted language.

What affects reality is important.


> doesn't mean that the application of that understanding to implementation humans will take a liking to is well understood.

I really like how ML implements it, does that make me somehow less human?


What you like or don't like isn't really the point. The point is that it isn't unimportant when judging how "good" a language is to take into account how many people are willing to use it. It is hard to argue that a language is "good" when a negligible fraction of developers are willing to use it.


As Alan Kay said, the industry is a pop culture, driven by marketing not science. I'm still going to argue for good ideas though.


If one is to come up with an excuse for not understanding people, one should pick something that at least sounds plausible.


Both industry and academia are making languages according to their own incentives and criteria.

Industry language designers are trying to make their users more productive enough to justify the cost to design and build the language. An industry language that doesn't have enough users and doesn't make those users' lives better enough to offset the salaries of the language team is a losing proposition and a bad language.

Academics are trying to publish papers on novel ideas. A language that cured cancer would be considered a bad language if someone had already designed a language that cured cancer the prior year.

These incentive structures are somewhat opposed: the easiest way to make a language easier to adopt is to make it more familiar and less novel. But they are somewhat aligned as well. A language with no new ideas can't be any better than the languages before it.

So there is a natural diffusion process where academia explores the frontier of weird language ideas. Many don't pan out. All of them are weird and unapproachable when they first appear. The ideas worth keeping stick around. Eventually an industry language will add a dollop of those novel ideas and surround them with a familiar framework so that users can actually absorb and use the thing.


> Academia on the other hand will make great languages that never reaches critical mass

Interesting choice of words: my guess is that the great languages Academia cooks up scale poorly on code bases which receive tens of updates per day (let alone hundreds or thousands). There's usually a huge difference between industrial tools and artisanal ones.


Well, let's look at the case of Facebook Messenger's webapp, which is written in BuckleScript (now ReScript), a language based on OCaml. That's about as solid as you can get academically. Here's what they said:[1]

> Full rebuild of the Reason part of the codebase is ~2s (a few hundreds of files), incremental build (the norm) is <100ms on average. The BuckleScript author estimates that the build system should scale to a few hundred thousands files in the current condition.

> Messenger used to receive bugs reports on a daily basis; since the introduction of Reason, there have been a total of 10 bugs (that's during the whole year, not per week)! *

> Most of the messenger core team's new features are now developed in Reason. Dozens of massive refactors while iterating on ReasonReact. Refactoring speed went from days to hours to dozens of minutes. I don't think we've caused more than a few bugs during the process (counted toward the total number of bugs).

In a high-velocity codebase, modularity is the name of the game. And you will be hard-pressed to find languages outside the ML family which are that good at modularity. OCaml is just about the only industrial-ready language in this space.

[1] https://reasonml.github.io/blog/2017/09/08/messenger-50-reas...


Don't buy that propaganda, however well-intentioned it is.

At the time when that post was written (including the "no bugs") the web version would continuously fail in the simplest of tasks while having a fraction of the features of the app version.


Any moderately complex app is made up of so many moving pieces, I'm amazed that you can pinpoint those failures specifically to the web frontend piece.


I can't find what I wrote at the time, but there were multiple UI/UX issues and logic bugs.


In any case, the point I was making was that languages from academia potentially could scale up in high-velocity and larger teams.


They must have changed how they report bugs, because the number of issues messenger.com experiences is definitely more than 10 a year. Also super slow.


Aren't Reason and ReScript different languages? It's confusing which one they're talking about here.


They are now, at the time the post was written ReScript did not exist and people used to refer to the combination of BuckleScript + Reason as 'ReasonML' for simplicity.


> my guess is that the great languages Academia cook-up scale poorly on code bases which receive tens of updates per day

Why would they? Many are designed meticulously to address the issues industry has. I've worked on a multi-million line Haskell code base in a financial institution and it scaled better than e.g. C++ or Python. In 30 years time, I'm sure the industry will be extolling the virtues of algebraic data types, purity, immutability and referential transparency.


God, what's the compilation time of millions of lines of Haskell?

> In 30 years time, I'm sure the industry will be extolling the virtues of algebraic data types, purity, immutability and referential transparency.

There are plenty of industrial languages with algebraic data types and immutability. Not as many as I would like with purity but I doubt it will take 30 years.


> God, what's the compilation time of millions of lines of Haskell?

It wasn't compiled with GHC, but with a proprietary compiler. It could typecheck the entire lot in a few minutes (in parallel), which was usually enough to detect breaking changes.

> There are plenty of industrial languages with algebraic data types and immutability.

A few do, yes, certainly not plenty. But defaults matter and immutability means more than just "const", e.g. persistent collections.

> Not as many as I would like with purity but I doubt it will take 30 years.

Given that big tech is delivering languages that are basically the same as Algol 60, I honestly think it will take at least 30 years.

http://cowlark.com/2009-11-15-go/


Purity is nice and makes code clear to reason about. But when you want the last drop of performance, having a place to overwrite memory as you invoke a SIMD algorithm, or to cast and do bitwise stuff, is invaluable, whether we like it or not.


Such a function can still be pure on the outside. This is real (state) encapsulation, not the encapsulation sold to you by OOP vendors.


Ok, but there was a discussion about Haskell, so I was sticking to that... of course, if you can have functional as an extra without preventing imperative at the low level, then you can apply the best of both worlds.


Haskell lets one write fast imperative code when needed, or call C functions.


Go's biggest problem is that no one at Google dares to question design decisions. After all, it's Rob Pike and Ken Thompson! When was the last time they had to write and maintain a modern, large software system? Go is great to fool around with, but at scale it becomes a nightmare. Most of your code, first of all, is error checking. Because, new rule: exceptions are suddenly bad (they are not).

It's really hard to know what you are doing right or wrong when you are writing a large program. There are no guidelines for proper layout, logging, and error checking practices, and no matter what you do, it's verbose and error-prone (more code = more bugs). It's all DIY. I referred to HashiCorp code to see how they do things, for example.

As someone here said once - Go is basically improved C, after snoozing through four decades of language development. On one hand Go lets you go wild, and on the other it imposes rules that are seemingly arbitrary, like enforced naming conventions, but only in certain places.


Go is odd in that people constantly complain about what it cannot do or does incorrectly, while having become one of the most prolific languages at the same time.


C++ is wildly prolific, and yet it is under constant criticism. Most people in my experience (even C++ proponents) would agree that much of that criticism is reasonable, even if there's a subset of C++ that can be carefully used to create reliable software.

Go is odd in that its proponents seem to flatly deny any of the very reasonable criticisms levied against it. And they continue to do so right up until the moment that the language authors finally address them.


Makes it seem like we're living in a bubble while others are getting stuff done, doesn't it?


Go's compilation speed is only impressive to newer generations that never used compilers for Pascal dialects, Modula languages, BASIC, ...


Right. My view is: Go brings back a lot of the great traits of languages like Pascal and Modula. Fast compilation speed, a simple static typed language, a modules concept. Adding a few nice things like GC, high order functions and some more.

I feel programming too often lost productivity with quite a few modern languages. Go brings that back and adds a good amount of modern features, but doesn't get too distracted by them.


How does it bring back productivity when it has the expressivity of C (at least before generics; it's a bit better afterwards)?


C doesn’t have methods, interfaces, closures, newtypes, packages, map or safe slice and string types, or safe automatic allocations.

What bad faith nonsense.


And before generics it could not even implement maps in the language itself, hence my comment.


I do not think this is true. Async programming via channels in Go is a big win and way more expressive than what any C library can give you. Also, higher order functions and interfaces are good. On top of that they added generics. I think that it is more than enough of a practical language already.


True but they didn't have to deal with the complexity of the modern software where a typical project can pull in thousands of dependencies (transitively).


That is taken care of by using binary libraries, see Delphi or Eiffel.


I’m pretty sure Go’s maintainers were aiming for “mere” practical success, as opposed to some theoretical success that only academic languages succeed at.


What's the definition of "success" here? Popularity?


As I understand it, Google's rationale for Go is to enable them to hire a lot of say, good-but-not-great fresh CS graduates with little or no practical experience and produce software at their scale without putting those new hires through a multi-year training programme.

So the definition of success is that Go makes those people productive. If you give these fresh hires C++, now everything is on fire and even experienced people sent in to extinguish the flames end up just saying evacuate the entire site and let it burn itself out because that will be cheaper than diagnosing what's wrong.

This goal means some choices Go makes which I don't like seem more respectable. For example Go specifically warns you not to get clever. This is sound advice for a 22 year old new programmer who still probably thinks "Romeo and Juliet" is about true love, but it'd be very frustrating if you were trying to squeeze the last drops of performance juice out of a system. The answer from Google's point of view is that Go is the wrong language in that case and it's time to rewrite.


Which is ironic given the whole plethora of hiring practices at Google.

Also Go is mostly used externally, Google's key projects are all based on Java and C++, even Kubernetes would have stayed in Java if it wasn't for some Go folks joining the project and pushing for a rewrite.


It is not ironic if you have ever interacted with these people. They suck at everything outside of leetcoding.


> As I understand it, Google's rationale for Go is to enable them to hire a lot of say, good-but-not-great fresh CS graduates with little or no practical experience and produce software at their scale without putting those new hires through a multi-year training programme.

From my discussions with employees at Google, the rationale for Go is to keep a large staff of promising CS graduates busy on dead-end projects (while preventing them from causing too much damage) until something takes off and they need to actually put some of that spare headcount to use.


I think so. I wish OCaml or Clojure had become popular like Go and Rust when they were the fresh thing of the day..


Something like “Optimized for software development in the real world”. Wide adoption is a consequence/indicator of this optimization, not the definition itself.


"Shit show" is a bit imprecise. What would you say are the major shortcomings in Go and how do they manifest as problems in practical reality?

Also, it would be interesting to hear your take on why you think languages that are, as you say, from "a sound theoretical foundation" tend to struggle with adoption?

Could it be that you fail to take into account that the things that make a language appealing in an academic sense may not make the language a particularly practical language?


At least in the case of PLT Scheme, I recall a conference where there was open hostility to the notion that their academic language might (horrors!) become widely adopted.


And understandably so!

Wide adoption means certain academy-hostile rules: backwards compatibility, a lot of user pressure towards certain directions, etc. Academics need freedom, and adoption is not about freedom.


Haskell seems to have muddled through catering to both, though! Not sure whether they see Racket as still in that researchy space.


Does Haskell make it through your filter, with its multi-disciplinary foundation and mix of industrial and academic sponsors?


> start making a language from a sound theoretical foundation and then implement it

Ada may be up your alley. It was fully designed and specified before a single line was written.


It sounds like: one of those is constrained by reality, the other ain't


For the working programmer, do you really want theoretical soundness? Though I have a background in formal mathematics, when it comes to the day-to-day, what pays the bills, and what I might need to operate in anger, I care much, much more about legibility, debuggability, inspectability, ease of implementing X (without a stdlib if need be), ease of testing, no footguns, and a PL that respects programmers (btw Go fails most of these).


I personally do. At my company we recently finished adding types to some libraries that were a core part of our system but were deemed "untypeable" by the maintainers of the libraries themselves (we're contributing back), and only band-aids existed. We needed several advanced typing features in our languages for that that weren't available for us a couple years ago.

Each time we added more advanced typing to new sections of the system, new bugs and security issues were found. At the end there were over 100 places that were being used incorrectly. This is something that linters and the library itself failed to catch for years, but with proper typing we got there.


That's not theoretical soundness, that's actual correctness. I'm mostly talking about things like io monads.


If your code can run IO operations synchronously and branch on the result of previous operations, you've already got an IO monad. The point of being explicit about that is that it allows you to check that a given piece of code can't do those things. Keeping database calls out of the render loop, for instance, is very much a matter of actual correctness.


In most languages with an IO monad, your power of proving that x can or can't be done comes at the cost of hiding the business data inside the monad type that wraps it, which severely impacts debuggability and, in many cases, code legibility and the understandability of the data you care about, because there might be a business logic error elsewhere that you need to track down that no amount of formal correctness will save you from.


The decades-old solution for that is to not mix the business and IO code in the first place. If you do, that's entirely on you. This works exactly the same and has the same goals as in popular OOP architectural patterns like Clean Architecture, Hexagonal Architecture or Onion Architecture.

With the difference that with a better type system it becomes significantly harder to do this kinda thing to begin with, and that's a GREAT thing. Your business code is extremely simple, reusable, isn't intertwined with IO stuff and is 100% testable with minimal effort, without needing mocks or anything of the sort.

If you don't care about this kind of correctness, then fine, but don't pretend this is something that's not sought after by both OOP and FP practitioners.


Where did you get the idea that I practice OOP? I exclusively use FP, and I use hexagonal-ish FP design. But a question: suppose you discover you have a logic bug and you write a pure logic test for it. But perhaps the path through the business logic was written by someone else, who has since left the company. It's a tax rule, so you have to navigate through several layers of abstractions that are sometimes specialized for different countries, sometimes generalized. Maybe they didn't write all the tests that you'd like to see. You have one hour to deploy a patch because 9-5 customers are entering their work hours on a different continent whose tax rules are what you need to fix. How do you introspect the data moving through the code?


Where did I say you practice OOP? I'm just making a comparison to another paradigm in my reply to demonstrate that this is not an FP problem alone, continuing the train of thought started by hither_shores. Not everything is about you, friend.

About the "IO monad in business" issue: there are ways to solve it without having to pass IO through business layers or break architectural constraints. In the OOP architectures alone there are lots of patterns to avoid doing it already, and if you're doing it, it should all be in place, probably by passing a repository down to the business layer. In FP there are similar patterns to avoid needing a monad down in business code; passing down a repository-analog is enough. If you can't solve it without a "hack", then it's fine: that's just technical debt, and it's on you to fix later. Or not, it's your work. About the question: I'd need more context for that; that's the vaguest thing someone has asked me lately.

It's however one's own choice to decide how to deal with that. I prefer using constraints imposed by the typing system. You're free to do otherwise, but don't imply nobody wants it like you did in your first post, as there's people who find it useful. Once again, not everything is about you and you alone.


I don't see why you're assuming we did something different, I even said "advanced typing features". We have several effect-related types, including IO.


In my experience go is great at all of these, but then again I’m coming from c and then erlang…


I’ve completely lost any confidence that FANG will be able to make some great language because they always take the easy way (makes more business sense, see Dart).

I've found Dart to be perfectly adequate. Like Go it doesn't try to be overly clever; unlike Go it doesn't consider verbosity to be a feature, and believes you're capable of using advanced techniques like ternary expressions responsibly.


If you consider conditional expressions an “advanced technique”, that explains a lot…


I think in Java, you would need to do a type-class approach.

    interface Numeric<T> {
        T zero();
        T add(T a, T b);
    }

    static <T> T sum(Numeric<T> n, T[] v) {
        T summer = n.zero();
        for (int k = 0; k < v.length; k++) {
            summer = n.add(summer, v[k]);
        }
        return summer;
    }


This will work, but will have poor performance due to the way Java generics are implemented. You could use your function with `Integer[]` or `Long[]` arrays but not with `int[]` or `long[]`. `Integer` being a wrapper class around `int` is a regular Java object accessible via reference, hence slow and memory-inefficient.

On the other hand C# offers "real" generics and you can use almost exactly the code that you wrote (generic syntax is slightly different).


Type classes are on the roadmap after Valhalla [1].

[1] https://blogs.oracle.com/javamagazine/post/what-are-they-bui...


This is good news.

My wishlist for Java is for null to be illegal for all types unless they are encoded as nullable such as `T?`.

Until then, I just use Kotlin and/or Scala when I require JVM.


Also in .NET, but then you enter generic hell and start thinking vanilla ES6 is better.


C# is getting support for generic math in 11 with the addition of static abstract members (including operators) for interfaces.


AFAICT this is basically how numbers work in Haskell, and I see no particular problems with them. You usually don't need too many number types of different nature.


Generics are great when you need them, but they will cut you badly if you misuse them.

Generics are viral. When you make something generic, you often have to make the things that touch or contain it generic, too.

Generics also create tight coupling. When you change the definition of a generic interface/class, you'll need to update your usage across the codebase. As opposed to, say, adding a new field to a class, that can be safely ignored anywhere it isn't used.

When this component you're updating is highly connected to other parts of your code, perhaps add another generic parameter to it, it completely explodes and you have to jump all around your codebase adding generics.

The kicker is that you may be updating components which are themselves generic and highly connected, setting off secondary explosions. Pretty soon you're throwing that codebase out, starting over, and swearing to yourself that you'll never touch generics again.

My advice is to assume generics are a premature abstraction until you've exhausted what you can do with more concrete approaches.


>Generics are viral. When you make something generic, you often have to make the things that touch or contain it generic, too

Do you have an example of that? Can't you always "typedef" any particular generic type as a concrete type and work with that going forward?

>Generics also create tight coupling. When you change the definition of a generic interface/class, you'll need to update your usage across the codebase. As opposed to, say, adding a new field to a class, that can be safely ignored anywhere it isn't used.

I don't see how this is different from concrete types or interfaces. If you change a public API you may have to update callers. If you change internals you don't have to update callers. Perhaps you can show an example to clarify what you mean.

I'm not a huge fan of generics myself, but I think your claim is that generics force you to introduce unnecessary dependencies. I don't see how this is true on a logical level. Dependencies between compiled artifacts are a different matter, but that's an implementation issue and I don't think it's what you're talking about.


https://github.com/dotnet/csharplang/issues/1328

> Have you ever wanted an interface (or abstract type) to be parameterized on a polymorphic variable to its implementer, but not its users? Have you ever wanted to parameterize over internal state, without revealing what that state contains to your users? This is what existential types allow you to do.


I mentioned in a sibling comment, I just don't wanna write an example, sorry.

You can't typedef, or populate the generic in any way, until you've reached a point in your code where you have sufficient information to know what to populate it with. This can often be further from the point of the initial change than you might like, as I described. (ETA: I didn't realize viral implied that you can _never_ concretize, I only meant there's a tendency to pass it up the chain.)

Changing public APIs will cause secondary changes. Generics are coupled more tightly than some other kinds of changes. For example adding a parameter to a method in an interface does force you to update all those implementations, and then all the usages of those implementations, which is a lot of work and could potentially trigger a similar situation. But generally it doesn't bubble up as high. Because generics are more abstract, it's more difficult to populate that parameter - you're more likely to need to pass the buck by being generic over that yourself, which causes you to update more usages, etc.


Please show code then

I don't remember misusing C# generics even once, and I struggle to see an example of such a case.


This is a reasonable request, but I don't really want to write a code example, that feels like a lot of work and I have 0 investment in changing anyone's mind about this (I didn't really expect this to be controversial - I was just sharing my experiences and expanding on Avlin67's comment about "generic hell"), but if you had a question I'd be happy to answer it.


The only example I can think of is the bifurcation non-generic collections vs. generic collections in the early days of C# when you had a non-generic collection and couldn't pass it to a method which expects a generic collection, but it was the consequence of the development history of C# and is not a problem of generics per se.


All of the things you said apply equally well to function parameters.


That's an interesting point. From a strictly theoretical perspective you're right. I think it's that generics are more abstract and have a higher blast radius. Eg, you add an argument to a method, you need to update usage of that method. You add a generic to a class, you need to update everywhere that class is used.

The fact that arguments are less abstract also I believe tends to prevent them from bubbling all the way to the top. Generics can often only be populated at the top-level usage. Function arguments I find don't usually bubble up that far.


> you add an argument to a method, you need to update usage of that method. You add a generic to a class, you need to update everywhere that class is used.

You're talking about two different things here, though. What if you add a generic parameter to a function? It'll often be inferred at existing call sites, but worst case, same as any other change to a function's signature.


The reason I'm relating them is that they're both changes to a class, that have different blast radii and different levels of abstraction.

In the worst case, adding a nongeneric parameter to a function could have the same impact. I've never heard of that happening though. I'm trying to express how I've observed things working in practice, not the theoretical boundaries of what could happen here.

So lets say you add a concrete parameter to a method. You update the usages. Somewhere the output gets stored in an existing class. So you add a new field to this class of the correct type. You're done.

Let's say you do the same with a generic. Now when you add that field to that class, it also has to be generic over that type. Now you need to update all the places where that class was used.

If your code is overly generic throughout, the likelihood of this having secondary or tertiary effects and having a runaway refactor becomes pretty darn high.

Being too abstract will always get you in trouble, and it'll probably look pretty similar. I'm just saying it's very easy to do with generics and harder to do with less abstract techniques.


> In the worst case, adding a nongeneric parameter to a function could have the same impact. I've never heard of that happening though.

Can you clarify this? I'm reading it as "I've never heard of anyone adding a parameter to a function" and that's so far from my experience that I'm either misreading or you work in a vastly different field than I do.

> Now when you add that field to that class, it also has to be generic over that type. Now you need to update all the places where that class was used.

Only if you need that to be generic too. If you change an int to a T, and you want to preserve the existing behavior for existing callers, they just call it as f<int>() instead of f(). Languages with good type inference will do that for you without changing the calling code as written.


Apologies, I mean that, if you came up with some kind of pathologically bad architecture, it could encounter the same failure mode when trying to add a parameter to a function (like, the callers need to add a parameter, and their callers, etc). But as you note, adding parameters to a function is routine, and I've never heard of this happening. I've definitely added parameters in a way that was tiresome and required me to go higher up the chain of callers than I would have liked, but not in a way that spun out of control.

I'm not really sure what to say at this point really. I think we're miscommunicating somehow. Would you agree that if we are too abstract with our architecture, we'll end up with a brittle and difficult to maintain architecture?


I do agree with that, and with the implication that one shouldn't add generics (or other abstraction) where they don't provide enough value for their costs.

But I'm confused because your example (adding a generic parameter to a function) seems to be an example of adding abstraction to code that did not previously have enough abstraction.


Yeah for sure. If we need more abstraction we need it, generics are just really, really abstract. I'm just saying generics should be a last resort. And if you find yourself with generics all over the place, you might take a step back and ask, did I make a bad architectural decision that will blow up in my face later? Can I do a medium sized refactor now to save myself a massive refactor later?

Coming from Python, I had a bad habit of premature abstraction. In Python, it's easy to be very generic at very little cost (not necessarily using generics - they exist in Python, but they're not "real" since Python is gradually typed). I thought of keeping things generic as "designing for expansion". Then I encountered the problems I've been describing, small refactors would turn into giant ones, and it was entirely unsustainable.

When I asked for advice about this, what I got was pretty much, "Oh yeah, that'll happen. Just don't use generics if you can get away with it." Initially that felt like a nonanswer to me, even a brush off. But as I matured in Rust I realized the advice was spot on, and that I had been abstracting prematurely.

I've seen techniques that can use generics well and actually make coupling looser, and that's awesome, and I don't mean to suggest that one should never use generics. I acknowledge I got into trouble by _misusing_ them. I'm just saying it's an unwieldy tool for special situations. It will rapidly expend your complexity budget.

The original context I was responding to was something like, someone says, generics are great until you get in generic hell, and then someone was like, generics seem fine to me. And I just wanted to explain how one gets into generic hell.


I am now fairly sure we agree and I just didn't like your example.


This is not the case with traits/type class approach seen in Rust/Haskell.


I'm afraid that it most certainly is, as Rust is the only language I have ever used generics in, and I had this problem.

Lifetimes suffer from the exact same problem as well, since they're really an exotic form of generic.

That said, I will readily admit that it was a lack of skill on my part. From talking to people in the Rust community though, I gather this isn't an uncommon experience.


Lifetimes are viral in Rust because 1. you can't abstract over them, 2. affine types are viral by design, but generics themselves are anything but viral.


They're not viral in rust - you can always replace the generic with a concrete type in super types.


Well that's rather the point of being generic with any language, isn't it?

You can stop being generic when you have sufficient information to stop being generic. If you're writing a library, that can easily bubble up all the way to the top, because you may never have enough information.

ETA: I think I was using a definition of viral that wasn't entirely correct. I thought it was a casual term rather than a precise one. But it seems like you're saying something is viral if you _must_ pass it on. In which case I apologize, generics are "semiviral" (I'm trying to introduce this term - if it already exists & isn't this, I apologize) - you don't need to pass them on, there's just a tendency to. The result looks very similar.


Rust also has associated types, which are exposed to the implementer, but not to the consumer.


I mean, generics and associated types aren't equivalent, but they also are often exposed to the consumer. Eg, if you're accepting an Iterator, you'll want to populate the Item associated type.


How is that different from any other part of a function signature? It’s part of the type system, of course changes have to be accounted for at use sites. But not doing so would be just incorrect.


Honestly I have no idea what's going on with this code.

Where is `zero()` defined? Why does a `Numeric` have a function to `add` two numbers? Shouldn't you add a single number to a numeric? What is the relationship between `<T>` and `Numeric<T>`? I thought `T` was a `Numeric`?

Edit: Upon closer inspection, I kind of get what's going on. But it's really cryptic for something that should be simple to express.


> Where is `zero()` defined?

It's defined by whatever implements the interface.

> Why does a `Numeric` have a function to `add` two numbers?

Because Java doesn't have user-defined operator overloading, so if you want to add stuff in a generic fashion you can't rely on `+`.

> Shouldn't you add a single number to a numeric? What is the relationship between `<T>` and `Numeric<T>`? I thought `T` was a `Numeric`?

`Numeric` doesn't contain data, it just defines operations on numbers. So `Numeric<Integer>` would define operations on `int`s, `Numeric<BigInteger>` would define operations on `BigInteger`s, etc.


Think of `Numeric<T>` as analogous to `Comparator<T>`. T is the type you're doing stuff with, `Comparator<T>` tells sorting algorithms how to compare two Ts, and `Numeric<T>` tells mathematical algorithms how to add two T's, and what a "zero" value for T should be.

There are probably other reasons to do this that I'm forgetting, but off the top of my head,

1. You can implement `Numeric<T>` or `Comparator<T>` for types `T` that come from a library that you can't change (so can't make T implement an interface that the library author didn't implement), or where you don't want to introduce a dependency (you could have a type class `JsonDecoder[T]` that comes from a library, and you don't want the library with your T to introduce a dependency on the json library).

2. You can have more than 1 implementation for a given T. For JSON decoders/validators, you might provide a decoder which bails out on the first error, or one which tries to continue reading fields so it can return all errors (field X was a string, expected number. Field Y was expected to be >= 1024, etc.). For comparators, you might have a `.reversed` function to easily make a comparator that sorts in reverse order. etc.


I think this may be good reference material: http://learnyouahaskell.com/types-and-typeclasses

Basically I'm encoding this approach in Java.


Isn't the java problem that (a) the + operator is only defined for some built-in types, and (b) the int/Integer boxing distinction?


Right, you can't overload the + operator, and primitives can't be used in generics.


yep, though a monoid isn't strictly necessary - a semigroup is sufficient for this use-case.


This seems very basic. Not to dunk on prof. Lemire, i know he has some great posts. This one is just barely scratching the surface of what makes a language good at supporting polymorphism or not.


If I was a researcher focused on developing efficient algorithms for numerical data this example would also seem to be one of the most critical for my work.


This is good to hear.

There's something like a cognitive bias that makes me leery of generics (in any language). Once you have some cross-cutting feature - generics, or object orientation, or, in functional programming, those functions of type 'a->'a, or assembly-language addressing modes - people get bogged down trying to make everything in their work generic or object-oriented or "orthogonal". Lemire's example is relevant to his topic, but it could be a poster child if someone else did it. They need to sum an array of int. So they spend 4 days getting a function and a generic type that can sum every numerical type, rather than 15 minutes to sum int arrays.

What's this called?


> They need to sum an array of int. So they spend 4 days getting a function and a generic type that can sum every numerical type, rather than 15 minutes to sum int arrays.

More likely they install a 3rd party generic sum function and don't have to write their own at all. Well, in the case of a sum function it's probably only 15 minutes to write the generic version, and there may even be one in the standard library. But consider something like a concurrent hash map. In Rust, you just `cargo add dashmap` and you have one that works with any type that works with a regular hashmap. This isn't possible without generics.


>> They need to sum an array of int. So they spend 4 days getting a function and a generic type that can sum every numerical type, rather than 15 minutes to sum int arrays

> More likely they install a 3rd party generic sum function and don't have to write their own at all. Well, in the case of a sum function it's probably only 15 minutes to write the generic version

I'm not sure "sum over an array of int" is a great "simple" example - how do you guarantee that the result can fit in an int?

This is (one reason) why Julia implemented a full, scheme-style, numeric tower right in the core language.

Sure there are some examples where you can sum N Mbit numbers into an Mbit number, and either want overflow, or is somehow "confident" things will fit. But in general it's just lazy "no one will ever have such a large shopping cart"-design...


> This is (one reason) why Julia implemented a full, scheme-style, numeric tower right in the core language.

What's the result of adding two i32 then? Is it an i64? Doesn't the accumulator need to get wider on some additions, since adding an i64 to an i32 might also overflow an i64?


Actually, the default is overflow, I now realize. But there is also automatic promotion, so adding an implicit Int64 first promotes the whole sum:

    0 + typemax(UInt8) + typemax(UInt8)
    510

while adding the UInt8s on their own overflows:

    typemax(UInt8) + typemax(UInt8)
    254

(i.e. 255 + 255 overflows to 254 - or, with promotion, sums to 510)

See: https://docs.julialang.org/en/v1/manual/integers-and-floatin...

Try: https://julialang.org/learning/tryjulia/


The sum example is simple, but very often have I seen people write general solutions for business logic or whatnot that you can't just get from a 3rd party package, when it wasn't at all clear that's needed. I have done this myself too.


> What's this called?

The blub paradox https://wiki.c2.com/?BlubParadox


I don't think that's what the blub paradox is. I think the parent comment is arguing the opposite. People who believe in the blub paradox are usually so smug and full of themselves that they don't pursue simple solutions.


The parent commenter is dismissing generics, and various other features, as something that "bogs people down" and makes them write worse code.

This is the essence of the blub paradox, thinking that people who use a feature are simply misusing it, after all in blub you don't have generics and things are fine. They're just making things more complicated than they need to be.

The blub programmer isn't necessarily smug, they just make excuses to dismiss more powerful language features that they are not sufficiently familiar with.


The problem with the blub argument is that it heavily relies on the existence of an undisputed power hierarchy among programming languages. Only s/he who is an expert in the most powerful language is in a position to pass judgement on all other languages.

I don't know how to resolve disputes over the relative power of programming languages on that basis. Are dynamic languages more or less powerful than static languages with advanced type systems? I don't know. What's the definition of power?

But I think what we can take from blub is that familiarity with one language or set of features is an insufficient basis for dismissing other, unfamiliar, languages or features.


Having used trait-based generics, I think that trying to piece together what's going on in typenum, the GAT stabilization process, https://hirrolot.github.io/posts/rust-is-hard-or-the-misery-..., num-traits, and variance and https://github.com/rust-lang/rust/issues/25860 is absolutely bogging me down. I feel (and observe as people try modeling contracts in traits) that trying to build code which plays well with open-ended type parameters is inherently more difficult than building a correct solution for a single type. Generally I'd prefer a more focused less generic solution which corresponds to a single function in the generated machine code (modulo inlining), except in cases where generics (over a small or open set of types) provide a clear benefit (collections are a clear example, IMO point-free .iter().map().sum() style is usually a regression in readability).


I'm not dismissing generics or object orientation or orthogonal addressing modes or 'a x 'a -> 'a functions. I am observing that there's a trap that a lot of people fall into, where they spend inordinate amounts of time making everything as generic as possible at the expense of time-to-market, and understandablity.

A good class or generic is hard to beat. Unfortunately the understanding of the details that make something a good class requires experience and experiments. Maybe I am talking about premature optimization, writ larger, or in design.


The same thing happens in the other direction too, e.g. Rust coders trying to use Python without familiarity with dynamic scripting languages, who dismiss the whole language as a footgun (despite Rust's learning curve being harder, for instance).


This was just discussed here in another front page post “Why DRY is the most over-rated programming principle” https://news.ycombinator.com/item?id=32010699


Writing a generic function to sum an array of any numbers type is trivial in C++. There's no 4 day research period, even if you're new to templates.

    template<class T>
    T sum(T* x, size_t size)
    {
        T acc = 0;
        for(size_t i = 0; i < size; ++i)
            acc += x[i];
        return acc;
    }


Aren't acc and the xs the same width here? So you risk overflow?


Yes, obviously calculating the sum of fixed-width integers may result in overflow.



> those functions of type 'a->'a

Polymorphism. But note that there is only one function of type 'a->'a for all 'a, and that's the identity function.


Not quite. This also has the type ‘a -> ‘a:

def nitpick(x): throw “foo”


that's not a function (in the pedantic sense).

technically a partial function, or you could say that every function is really of the type A -> Exception \/ A


But you should not be writing functions that make you think about the fact that Hask is not really a category.


You're right, and foo x = foo x also qualifies as 'a -> 'a. The only function that always terminates is the identity function, then.


A function in the mathematical sense is total (always terminates).


We could throw some unsafePerformIO in there and do anything else as well, but of course we wouldn't have a function in the mathematical sense anymore.


    func sum[Number](nums: openArray[Number]): Number =
      for num in nums:
        result += num
Took about a minute.


Premature abstraction?


'Premature Abstractulation'

I'm sorry. I had to do it. Ban me.


I'll allow it


Key phrase in the article:

> So, at least in this one instance,

Yeah, one example which is not even that useful. It might be cherry-picked for all I know.


The site gives assembly code of the inner loop, but isn't the problem that there is additional overhead on the calling of generic functions?


AFAIU, as compared to pure monomorphization, yes, sometimes the model imposes runtime indirection. This is particularly the case for methods. However, for simple yet common cases, such as functions taking []T, the model permits inlining into the caller.


I’m also hoping that the more monomorphization will take place over time.


Go's issues are mostly not performance related, so more monomorphization will really not solve a significant portion of them.


They aren’t performance related because idiomatic Go pushes people toward performance. Generics currently push people away from performance (of course there are exceptions). I don’t expect it will be a huge deal either way, but it would be nice not to have to choose between performance and expressiveness.


I believe Lemire's point is that the compiler is still able to inline the function despite the generic definition, so at least in this somewhat narrow case generics are a zero-cost abstraction.


Numeric types are weird in Java and .NET. I'm more familiar with C#, but in C# adding two ints results in an int, and adding two shorts also results in an int. So addition is not polymorphic the way you would expect it to be. C# has a large number of numeric types, but under the hood it only supports arithmetic operations on three of them.

The problem is not with the implementation of generics per se; the problem is that numeric types in those languages are a really leaky abstraction.


Upgraded my production service from go 1.17 to 1.18 today. A nice 30% performance boost. Love go. It keeps on giving.


I think Go's generics are reasonable, but not quite powerful enough to scale.

It's all well and good to generalize basic containers and functions (good, in fact), but I wish the language was better at inferring types.

I'm sure this is something that will improve with time, but a bit after generics were released, I tried to build a type-safe language evaluator in Go using generics and found them lacking the kind of type-narrowing necessary for fully-generic architecture.

Short write-up of my conundrum on SO, if anyone is interested: https://stackoverflow.com/questions/71955121/how-to-type-swi...

TypeScript has me spoiled. :)


I haven't looked into Go's actual implementation, but isn't this kind of downcasting better suited to interfaces? Your code, as is, wouldn't translate to something like Rust, which uses monomorphization; you are doing a runtime "inspection" of the types. This looks more like the domain of duck typing than something that generics handle.


Here's a writeup exploring this very topic, and showing how Go's generics will currently give worse performance than just using an interface.

https://planetscale.com/blog/generics-can-make-your-go-code-...


Yes, I was trying to get it to infer the types for some code I was writing as well (a generic funcopt for two different types) and it couldn't - made me specify the type manually.


I think I am glad that kind of stuff doesn’t work, because that was pretty hard to read. Go is trying to keep things simple.

If you want a sophisticated type system with a language they complied to native code then use Swift or Rust.

I personally think it is good to have some choice in complexity level and that at least one language, Go, tries to carve out a niche in simplicity.

I am only barely convinced that generics was the right choice for Go.


IME complexity is mostly a function of intent and business logic, and will usually creep in all the same; it just manifests as tons and tons of for-loop spaghetti, duplication and code generation. You may not have to actually learn any "hard" language features, but instead you get a large number of moving parts that can't be abstracted away cleanly and that you need to mentally track.

I've just ported a relatively complex bit of business logic from Go to Scala, and while the result is definitely not as "simple" in terms of language features used, it's a lot less code, in a structure that I can actually keep in my head far more easily than reams upon reams of low-level imperative details that are kept consistent only by the implementer's self-discipline. Forcing code to be easily digestible by limiting the language feature set doesn't seem to work too well outside of a relatively limited space of straightforward applications.


I don't seem to be able to post on his blog (400 Bad Request), so posting here instead...

Interestingly, it looks like Rust (and C via clang) both generate something like the following as their one-at-a-time main loop:

    .LBB5_14:
        addl (%rcx), %eax
        addq $4, %rcx
        cmpq %rdx, %rcx
        jne  .LBB5_14

It's one fewer instruction than the Go version (because it has combined the MOV and the ADD), although I don't know whether that would actually result in higher performance.

I say their "one-at-a-time" main loop, because they actually seem to generate lots of code to try to take advantage of AVX or AVX2 to add 4 or 8 values at a time if available and the array is long enough!

It might be cheating, but summing an iterator over numbers is built-in to Rust, so summing an array is just: v.iter().sum()

The fun thing with this is that it's not just generic over number types, it's also generic over any type of iterator, so I can even use it to sum over the values in a HashMap: hashmap.values().sum()


The title says Go generics are not bad, and then in less than a page this guy only compared one use case with Java's generics.

I'd expect a CS professor to be able to bring a little more rigor to a blog post, but I guess everyone is losing the patience to read and write these days, even the supposedly erudite.


This is his blog, he can write whatever he wants. It is not a peer reviewed paper or even arXiv.

Someone submitted and people liked it, and it ends up in HN front page.

All I am saying is: please don't put pressure on random blog writers. You can criticize the content without saying "a CS professor could be more rigorous".


I only expected a CS professor to discover some of the issues listed here:

https://go101.org/generics/888-the-status-quo-of-go-custom-g...

I mean, my cousin still in high school can do better than that...


If someone doesn't know how to address an audience you can hardly blame the audience for not responding. Academic papers are written to further academic careers. Their audience is not developers, but other people who are also not developers.

Of course there is going to be an impedance mismatch. What makes people important, effective and useful isn't what silly snobbery they use to put other people down, but whether they can effectively communicate their ideas to the audience.


A lot of folks have commented on this being a cherry-picked example, but I haven't yet seen someone link to a set of examples that show the opposite: pathologically bad performance with Go's generics.

https://planetscale.com/blog/generics-can-make-your-go-code-... is a much more in depth dive into how Go's generics work that shows some of the easy cases, like this article, but also the more general cases.


This is like the most cherry-picked example that could've been chosen. Java can't do this because the primitive types are not objects. Mentioning the performance of this is hilarious because 1. it's Daniel Lemire, he should know better and 2. the performance of pretty much every other thing that uses generics is terrible because of gcshapes being passed everywhere and 3. Java literally has a JIT to make generics fast. And bear in mind I don't even like Java generics that much, it's just that Go does even worse. (But still better than when it didn't have it at all, I guess…)


> Java literally has a JIT to make generics fast

???

By the time it gets to the JIT, java generics have long since been erased.

Java's generic classes run at the same speed as everything else, because there's no actual generics there.

(Although I'd be very surprised if this doesn't change in the near future: https://openjdk.org/jeps/8261529.)


I mean, they’re definitely not specialized and optimized before they hit the VM.


Most things first exist, then exist and are good. We've seen the start of Go generics existing, but we haven't seen them be as good or as fast as they will ever be.

Five years from now, Go will still exist. The implementation of generics in that future version of Go is likely to be much faster and better than what we currently have.

Put a different way, "Generics are slow in Go v1.18"


Someone will read a blog post about slow generics, ignore how it evolves, and tell every coworker for the next 10 years that Go generics are slow.


The authentic HN experience.


"Java is slow"


Not as slow as .Net !!


I'm hopeful, but the blog post doesn't say "Go generics have room to improve", it says "they're not bad". And they kind of are right now. I can see a road to how they can get better but it's not really very honest to present just this example as indicative of how they work.


Cherry-picked? When writing programming code examples, this kind of thing would be the first thing I would try.

I would rather call it cherry-picking if one avoided covering such a case to make Java look good. Seems like a pretty obvious problem to me.

Handling basic number types is pretty bread and butter.


How can Go do it even worse than Java, whose generics implementation, with type erasure, is the worst of any modern language?

Java is way worse than Go in that respect.


Type erasure has little to do with the power of generics at the language level. Haskell, with one of the most powerful type systems around, erases types to a greater extent than Java does (concrete types are still encoded in JVM bytecode, Haskell erases everything).


For example, by not allowing one to create a parametrized method. `func (self SomeNonGenericType) Foo[T any](param T) T` is not allowed, best one can do is `func Foo[T any](self SomeNonGenericType, param T) T`.

The reasons are explained here: https://go.googlesource.com/proposal/+/refs/heads/master/des... but the point is, you can have this in Java, but not in Go.


Why not use a free function? That is more flexible. Go isn’t Java, it is not a deadly sin to write free functions.

You look at something like the Reader and Writer interfaces and there are very few methods. The Go approach is to put more functionality in free functions such as fmt.Fprintf which allows more code reuse across multiple types.

In fact I view the heavy Java use of methods as an anti-pattern.


One surely can, and probably that's about the only option they have.

But for some, `foo.Add(bar)` can be considered nicer than `AddFoo(foo, bar)` or `foopkg.Add(foo, bar)`. The semantic differences are mostly in an aesthetic/stylistic/personal-preference realm, so it's hard to argue for or against it.


Except that in Java, the former is more common and in Go, the latter is more common. Really, the latter looks more like idiomatic Go, which makes the original complaint somewhat redundant. You might stylistically prefer the former, but then you're probably not going to enjoy using Go anyway.


Well, yes, the name "Add" was a bad idea - collection management in particular tends to use free functions somewhat more, starting from the built-in `append`, hah. Although even that is not universal - consider e.g. github.com/deckarep/golang-set, which I believe is one of the most popular set packages based on the number of imports (it doesn't strike me as non-idiomatic); it's all methods there and not free functions.

And either way, I'd disagree on the general principle. I'm not claiming to be well-versed in Go styles and patterns, but most well-designed libraries I've seen had exposed `foopkg.NewFoo()`, useful structs and constants (e.g. `foopkg.FooParams` or `foopkg.NoSuchFoo`) and the rest is typically interface methods.


Well, IIRC the strings package for example is pretty much all functions, and so is fmt (IIRC). You might not consider these packages “well-designed” but they are certainly idiomatic. I don’t have the inclination to go trawling through the standard library for other examples, but I will say from experience, as a former Java developer and now a Go dev, I’ve sometimes found Go’s approach confusing because it’s more function oriented than OO.

So I guess our experiences differ, which of course is fine, but I respectfully don’t concede the point. :-)


Well, it means you can't have generic methods in an interface.


You can certainly use generic parameters in methods—the [T any] just has to be on SomeNonGenericType, even if it's not used inside the type itself https://rakyll.org/generics-facilititators


Yes, but that's a completely different thing. If you move T, you change the semantics.

- func (self SomeType[T]) Foo(...) is a method for a class parametrized on T. So if you want foo.Foo(1) then it only works for foo that's SomeType[int]. You cannot then easily call foo.Foo("one") on the same instance.

- func (self SomeType) Foo[T any](...) - if that'd be a thing - would be a generic method on SomeType that works for any T. So you can foo.Foo(1) and foo.Foo("one") on the very next line and that would work.
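The first case can be sketched like this (`Holder` is a hypothetical type):

```go
package main

import "fmt"

// A parametrized type: T is fixed when the value is created.
type Holder[T any] struct{ v T }

// This method's T is the type's T; the method itself is not
// independently generic.
func (h Holder[T]) Get() T { return h.v }

func main() {
	h := Holder[int]{v: 1}
	fmt.Println(h.Get()) // works, h is a Holder[int]
	// There is no way to call a method on h with a string type
	// argument: Holder[string] is a completely different type.
	s := Holder[string]{v: "one"}
	fmt.Println(s.Get())
}
```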


You can instantiate it with [any] then use whatever you'd like https://go.dev/play/p/JeSuB_xYNEf, but I don't think it would work with a type constraint such as `int|string`, which would be a fair point.


Sure, but it's no longer generic, you're just using runtime types again.

This is extremely important, because it means that with `func (s someType[T]) foo(v T) T` instantiated as `someType[any]`, `x.foo(1)` would return `any`, not `int` like a generic function would.

In fact, if you're going to instantiate a generic type parameter with `any`, I would very much doubt you wanted a generic type in the first place.


Oooh. My bad. Thank you! I honestly thought that wasn't possible at all. Today I learned.

However, it won't work for return values, right? Because e.g. `func (some something[T]) hi(b T) T` would return `any` and not whatever it was provided (`int` or `string` respectively).

https://go.dev/play/p/IYoPUQg04sg

Also, I don't think this can work well with types narrower/fancier than `any`, e.g. I had trouble figuring out how to deal with

    type IntOrString interface {
        int | string
    }
    
    type something[T IntOrString] int
https://go.dev/play/p/acwjxdpHTJO


Type erasure is interesting. It turns out that not doing type erasure (combined with instanceOf-like things, like pattern matches on type parameters) breaks parametricity. Parametricity is a very powerful reasoning tool.

Of course, type erasure isn't the actual culprit in that case, but once you don't do it, it gets very tempting to allow pattern matching on type parameters...


I read an essay by a CS postdoc talking about type erasure. He said getting to the point where you can do type erasure is golden. And then everyone makes the mistake of actually implementing type erasure. He said you consign yourself to no tooling and terrible debugging.


This.


You do realize that type erasure is the common way of dealing with (generic) types? Haskell also does type erasure, as well as basically every language outside C#.


You can have it both ways in C++ depending on your compiler flags!

And people wonder why we complain about C++..


> Java can't do this because the primitive types are not objects.

So this excuses the awful Java generics how? "It smells like sewage in the basement because the basement is full of sewage." In addition, generics over primitive/stack/value types (including mathematical operators) are working just fine in the latest iteration of C#.


It doesn't. Picking an area where Java is weak and where Go happens to be decent is the very definition of cherry picking. Go happens to do poorly in almost every other area and Java does meh in many (erasure, unsoundness, etc.) and excellent in others (profile-guided devirtualization).


really? Is Go that bad? Write a server that does communication or emulate select in Go in other languages. Go is really effective at writing things fast at the same time that it has value types and can scale not only in I/O but also in multi-core in a way that is easier than anything I saw before. I am a mainly C++ person but I must admit that the cost/investment ratio in Go is really good for writing server-side stuff.


> Write a server that does communication or emulate select in Go in other languages.

I just did that - I've been working on a very similar thing to goduplicator (a mirroring proxy), however with an additional requirement that it must not add latency to the primary communication path, e.g. it must connect mirrors asynchronously and buffer data instead of waiting for a slow mirror.

I chose Rust and the resulting code has been at the same time simpler, faster, and uses 3x less memory than the Go implementation. I have not only select!, but also things like join! or try_join! at my disposal, way simpler to use than channels and waitgroups.

Also looks like closing connections in Go is a mess. In Rust I just remove a connection value from the vector and I'm done. In Go they have to close them manually, but defer is quite useless in async code, where the connections are passed to a coroutine.

In Rust I can also interleave work concurrently on a single thread with async, avoiding synchronisation. In Go I'd have to spawn goroutines and then coordinate them through channels which would be both more complex, more costly to execute and more risky.

So IMHO Go is definitely an interesting language to write concurrent networking code in, but feels quite incomplete to me. You get products but no sum types, you get select (which is kinda sum for channels) but not join (which is an analogy of a product for channels). You get semi-automatic cleanup with defer, but it is tied to a lexical scope ignoring the fact that goroutines can outlive it.


Well, for latency you do have to deal with the GC very carefully, but the investment/ratio in Go is very good. It is just not for absolutely every use case, like everything else in your toolbox.


Investment ratio in Go is only good because of how little you have to invest. In Rust the required investment is bigger, but you get a lot more in return. Not sure about which ratio is really better.

GC is only a minor reason I chose Rust over Go for a networking related project. Despite having a GC, Go definitely feels more low-level and less structured than Rust, and leads to code that is longer and harder to reason about.

It is very similar to how a language with only a goto instruction to do control flow would be definitely simpler/smaller than a language that supports functions, loops and conditions, but the actual programs written in it would be brittle and harder to understand.


> Not sure about which ratio is really better.

That is going to depend entirely on requirements. Fast delivery, fast execution, correctness and others. No tool for everything :)


> Is Go that bad?

I'm personally no fan of Go (mostly due to the ergonomics of the standard library), but all this about generics is throwing the baby out with the bathwater. Go has a fantastic generics design.

> cost/investment ratio in Go is really good for writing server-side stuff.

And this 100%.


I’m a big fan of Go, and I’ve never been bullish on generics, but I really don’t see what is good about this design. I’m really not interested in generics with a runtime penalty even if it makes binaries smaller. I also think the constraint system needs work, and there needs to be a mechanism for generic methods. It’s possible for Go to get all of these things, but the current form leaves a lot to be desired. Rust’s trait system is best in class as far as I can tell, and I’ve criticized Rust often in the past.


Some more context would be appreciated on why you think that Java generics are awful?


The context is what was said above: Java generics can't abstract over primitive types, they only work for objects. That is awful, and it is important, because this one limitation is the only thing that makes the implementation of generics in Java possible and simple.

If generics had had to actually support ArrayList<int>, they would have required significant changes to how ArrayList is stored in memory, and to how a lot of functions which work with generic types are actually compiled. Instead, Java took the easy path in the implementation: since everything only works with reference types, there's no need for multiple copies of a function to work with different types.

Of course, this means that sum<Integer>(new Integer[1000000]) is going to be orders of magnitude slower than sumInts(new int[1000000]).
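For contrast, Go generics can constrain a type parameter to the built-in numeric types and operate on unboxed values directly. A minimal sketch (the `Number` constraint here is hand-rolled for illustration, not from the stdlib):

```go
package main

import "fmt"

// Number admits built-in numeric types; the ~ also allows
// user-defined types whose underlying type matches.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum operates on plain machine ints/floats with no boxing.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // 6
	fmt.Println(Sum([]float64{0.5, 1.5})) // 2
}
```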


It is unfortunate, but that was the only way Generics could ship at the time in a backwards compatible way. But I don’t think it is fair to call it awful, especially knowing the context. Also, when it does show up as a performance bottleneck, it is trivial to fix.


You should write a comment on his post - he will probably respond.


I could but it would probably starve him of the context of people disagreeing with me


Does he read HN?


Why do you care about forwarding negative comments for author?


Negative doesn’t mean it isn’t constructive?


I'm pretty sure he does and has commented here before.


Last time I checked, Java didn't allow generic arrays.


It's been a while since I've gone through any Java code. Why wouldn't the Java code sample compile?


you can't do `T summer = 0`, nor `summer += v[k]`, as there is no arbitrary sum operation for Number (e.g. a method `Number add(Number);` or `<T extends Number> T add(T t);`).

though this can be solved either with types, as mentioned in another comment (https://news.ycombinator.com/item?id=32029871), or simply by providing the sum operation:

    static <T extends Number> T sum(T[] v, BinaryOperator<T> adder) {
       T summer = v[0];
       for (int i = 1; i < v.length; i++) {
          summer = adder.apply(summer, v[i]);
       }
       return summer;
    }
    
    Float[] ft = { 0f, 1f, 2f, 3f };
    // which can also be written as
    // `float z = sum(ft, Float::sum);`
    float z = sum(ft, (x, y) -> x + y);


Bingo. You could use the longValue() or doubleValue() on Number to do math on if you insisted.


Java primitive types are not objects :/


This article is too simple to get a full picture of Go custom generics. There are actually both good points and bad points.

To get a comprehensive understanding of the current Go custom generics, please read: https://go101.org/generics/101.html


Your statement may be correct, and your book may be excellent.

But insulting a free knowledge sharing article while posting your own site which is laden with 'buy buy buy' is rather poor taste, IMO.


You don't need to buy it. It is free for reading online, and most contents in my website are free.

"insulting"? That is a too heavy word.


I rather agree, and must apologize as English seems to be slipping from me lately. I spent some time trying to think of the right word, but ended on insult, debating whether that was too negative of a word to use. Perhaps dismissive would have been better?


It is just too simple to draw a conclusion from. And IMHO, it is not very generics-related.


> so far I am giving go generics an A

I guess I’ll wait until the full review then


"not bad" is a weird bar to cross for a language that:

    1. decided generics is not needed
    2. went on for a decade
    3. implemented a version of it that is not performance-sensitive
    3.5. $$ by google


>decided generics is not needed

>went on for a decade

The FAQ on the official site already back in 2013 (I couldn't find an earlier snapshot on webarchive) stated they were open to adding generics but weren't sure how to properly design/implement them without overcomplicating the language, also citing other priorities.

>implemented a version of it that is not performance-sensitive

It's less performant than the ideal, but it's a tradeoff to avoid the exponential code bloat found in C++. In our large C++ project, which extensively used templated boost.signals, the code bloat from templates alone added a whopping 200 MB of machine code, which we were able to shed by switching to our in-house library. IIRC C# uses an approach similar to Go's to share generated code, with some type-specific conditions at runtime (at least Mono, as I remember, shared generic code for all reference types).


The Go authors have never said #1: they said they're waiting to decide how best to do it, and they want experience with the language to determine what problem needs to be solved before solving it.

#3: they’ve clearly said that they’re doing “get it right before you make it fast”.


Not sure how 3.5 follows.


And 1. is demonstrably false as has been stated a million times already.


They demonstrably decided generics are not needed for Go at release, or for the following 5+ years, I don't know how that could be seen as false. If they had thought generics are required for Go, they wouldn't have released the language before they had them.

What is often claimed instead, and is indeed false, is that they believed generics should not exist in Go. This is a position that many Go proponents take, but the designers of the language never did.


If they had launched without supporting odd numbers, nobody would be saying “they always intended to add that later” in their defense.


this article is hilarious:

1. go generics do this one thing

2. therefore they are good

Go generics are better than no generics but they're still limited, and that frustrates me.


The article really doesn't claim to briefly explore more than precisely that one thing. People just jump on the title plus some weird hate for Go.


Could you give some concrete examples? Something which requires a type of Go generics but which cannot easily be solved in another way?


Proper generics would be C++'s, though they're called templates and can get hardcore:

https://stackoverflow.com/a/498329/729738

.NET generics do have the same issues mentioned in the article.


.NET 7 improves this with static abstract members in interfaces and generic math.

https://github.com/dotnet/csharplang/issues/4436 https://devblogs.microsoft.com/dotnet/dotnet-7-generic-math/


Julia gives you the same (but dynamic rather than static), but without the awful syntax.


Proper generics would be parametric polymorphism with full program type inference, i.e. an ML derivative.


aha, so Go has some kind of generics now, so since it has become a real programming language I might actually give it a spin! /s

Never understood why generics were the subject of a holy war in the Go community. It's an important and basic feature these days.


C++ generic programming is super effective and efficient, even if it wasn't super user-friendly until concepts landed. The good thing about Go generics is that they are monomorphic for value types. That is fundamental. Java messed it up with type erasure, and now they have been at it with the Valhalla project for years.



