We have a very large Go codebase here at Stream, and not having generics is not nearly as big an issue as you might think. There are plenty of workarounds once you get used to the language not having generics. The fast compile times of Go are amazing. I was doing some Kotlin a few weeks ago and the difference is crazy. Go: install deps, compile everything, done in 5s. Doing the same in Kotlin: laptop freezes, Android Studio freezes, time to get a coffee :)
That being said it would be really nice to have some reusable map type structures that handle GC better than the default maps. Fingers crossed.
People from the post-1.4 era of Java aren't aware how huge code generation was before generics showed up.
And then all that momentum, all the tools, the books, the conference presentations, the code ... all of it. Pop! It died. Because code generation sucks. It is the worst solution to any problem solvable by a type system.
My favorite example of generics wonkiness was when I needed to wrap an untyped channel in a typed one to avoid "infecting" every call site of a utility function with untyped pointers.
I thought it was madness, but I brought it up with a very large Golang group and got "nope, channels are cheap! That's fine! There's repetition but it's easy to follow."
I've said it before: my personal take is to use Go, get a feel for the Go mentality, then take it with you to another language.
Go is just too stuck between low level and high level for me personally. I'd rather go under with Rust or over with Kotlin or C#
Most of the casting in Java 1.4 was from collections of Object. In Go the collections are typed, so the casting is confined to some very specialized pieces of code.
The only difference between Go and Java 1.4 in terms of collections is that Go has a generic map type, which Java didn't. Java 1.4 still had generic arrays, just like Go. In fact, it was a little easier to program with generic arrays in Java vs Go (but also less safe) because in Java a subtype[] can be passed to a function which takes a supertype[] (arrays are covariant).
I miss Java 1.4. It was small and concise. Java 5 added so much that no one knows all of it. Just look at the length of the Java Generics FAQ. It's hilarious.
Java before 5 wasn't a language, it was a library and a number of JVM implementations. It wasn't until Java 5 that there was a memory model spec that defined how things were supposed to work.
Yeah, that's OK until you have 20 million lines of generics-ridden crapola pumped out by the lowest bidder. That's the hell I spent a good chunk of the last few years untangling on the C# front. Let's model this correctly! Oh no, someone said fuck it, let's just use a bunch of generic data types!
Several thousand out-of-bounds errors, missing keys, null reference exceptions, and hash collisions later, the hair starts to get thin on top. I'm not even sure I'm happy with it for abstract data types.
Well, the generic programming model tends to favour lightweight abstract data structures over well-defined types. Those abstractions are by nature leaky, so many internal concerns leak out of the abstraction boundaries into the caller and give them one hell of a bad time.
This is not a problem with generics, but with C#'s lack of discriminated unions and/or tiny-types. Except what on earth are you doing with a dictionary whose keys are lists of dictionaries? I am quite sure someone has not modelled their domain correctly there. That's not something you can blame on the existence of generics - I shudder to imagine how much worse it could have been without generics!
I was exaggerating there, I admit, mostly because I can't post some of the hell I've seen without breaking contracts. The worst example I've seen was an abstract syntax tree specified entirely in terms of generic data types. I spent a couple of weeks rewriting it with concrete types and managed to find and fix tens of trivial bugs caused entirely by the design.
The point is really that it's hard to reason about such things and define if they are appropriate or not for a lot of people. It's a lot of rope to hang yourself with.
The sufficiently stupid-but-hard-working programmer can write crap code in any language. The actually-useful question is whether the language gives competent programmers enough rope to build whatever they're trying to build.
Agreed. I'll be banning generics from any code I have control over unless there's a very good reason for it. I saw too much of this crap in C#, and ran from it screaming.
> aren't generic Sets easily implemented, with map already being generic?
Since Go has neither generic functions nor generic typedefs you can't implement a Set with a generic key type on top of map, you have to reimplement all the set operations for each key type you use.
I think map[T]bool is already a pretty good set; the only things you can do with sets are insertion, deletion, iteration and checking for existence and they're all well-supported.
Of course, if you need a concurrent set you're right back in type system hell.
You can’t write intersection, union, difference, subset (contains all), or powerset as reusable functions for any element type. The idiomatic thing for now is to rewrite them as loops over and over, but that’s error prone, hard to read, and not a good use of time.
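To make that concrete, here is a minimal sketch (the function names are hypothetical, and the second version assumes the draft's `comparable` constraint): today each set operation has to be re-written per key type, while type parameters would let it be written once.

// Pre-generics: one copy of intersection per key type.
func intersectStrings(a, b map[string]bool) map[string]bool {
	out := make(map[string]bool)
	for k := range a {
		if b[k] {
			out[k] = true
		}
	}
	return out
}

// Under the draft's type parameters: written once for any comparable key type.
func Intersect[T comparable](a, b map[T]bool) map[T]bool {
	out := make(map[T]bool)
	for k := range a {
		if b[k] {
			out[k] = true
		}
	}
	return out
}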
In addition to that, map[T]bool only works if T is one of the few types that Go can check for equality automatically. You can't define a custom equality (+hash) function for your type and use it with the built-in map.
One challenge there is that map keys must be comparable, and only some types are: primitives, strings, pointers, and structs or arrays of comparable types; not slices, maps, or functions.
If you want a set of some complex kind of value that contains non-comparable types like slices or maps, then you have to build an indirection around it.
A good set implementation needs to support a comparison operation. I really wish this existed for Go maps, too.
* each key now has 3 possible states (true, false, and unset) rather than two
* a bool takes 1 byte to store (which may get more problematic due to alignment, I've never checked what the memory layout of go's map is so I don't know how much of a concern it is there)
An empty struct fixes these issues: a key being present means the item is in the set, and an empty struct is zero-sized.
edit: apparently Go maps are laid out as buckets of 8 entries with all the keys followed by all the values, so there's no waste due to padding at least.
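A quick sketch of the empty-struct idiom being discussed; nothing here goes beyond stock Go:

package main

import "fmt"

func main() {
	set := make(map[string]struct{})
	set["a"] = struct{}{} // insert: presence of the key is membership
	_, ok := set["a"]     // membership check needs the comma-ok form
	fmt.Println(ok)       // true
	delete(set, "a")      // remove
	fmt.Println(len(set)) // 0
}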
As someone who finds "indicating intent in the code" an important thing, I must admit I find this concept slightly horrifying. A Map and a Set are two different things, and which one you use conveys some intent as to what you mean by your code. I get that it works, but it would still make me unhappy to do.
Ideally, if the language/standard library provides maps but not sets, and you wanted to use the idiomatic set = map of type -> bool approach, you'd create a wrapper so that intent is preserved but users don't have to know about the backing mechanism. Of course, it's obnoxious if everyone has to do this themselves and the language lacks generics so you have to write this once for each potential type.
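A rough sketch of such a wrapper (StringSet is a made-up name; pre-generics you would repeat this for each element type):

type StringSet map[string]bool

func (s StringSet) Add(v string)           { s[v] = true }
func (s StringSet) Remove(v string)        { delete(s, v) }
func (s StringSet) Contains(v string) bool { return s[v] }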
Shallow wrappers that don't wrap much (now) but convey intent better are valuable if you buy into the idea of modularity and encapsulation in general. Some reasons (probably more out there):
1. The now-provided interface can more clearly express what the code is intending to do (better names for the operations you're providing than the underlying system has, e.g. renaming remove_from_end to pop or dequeue)
2. Hide methods of interacting with the underlying data structures that you don't want people to use (use a C++ vector as a stack, but don't want random access)
3. You can replace the underlying mechanisms at will without impacting the users
If you just wrap a vector in your own vector class and otherwise provide the same operations (or a limited set of operations but for no good reason to restrict usage), sure, that's moronic. But if you wrap a vector class in a "BigNumber" class and provide operations like add, subtract, mod, etc. then value has been added. Same thing with the idea of wrapping a map in a set interface.
Wrapping to hide is valuable, but wrapping has a cost which is generally underrated. Every wrapper is a thing itself which must also be understood when trying to understand how things work. And every wrapper is a division between blocks of code, meaning if you have changes which impact multiple layers of wrapping, it's harder to determine what to change, and to maintain the understandability of each layer.
For this reason I'm an advocate of lazy wrapping. Create an abstraction at the last moment, when it's painfully obvious what benefit it will provide, when you can see how it ties together disparate pre-existing code blocks, and when you have the highest confidence that it will stick and not need to be unwrapped next week by the senior dev.
> Every wrapper is a thing itself which must also be understood when trying to understand how things work.
I'd offer a different view. Wrapping/abstracting like this should reduce the amount of things a user of the abstraction needs to know. I don't care how Java's BigInteger class works under the hood, only that it does what I need it to do. If I did have to know how it worked to use it, this suggests a failure on the part of whoever created it.
It does increase what the maintainer of the underlying system (including the abstraction) needs to know, but if done in a sane manner this should not be a burden. So we're making a tradeoff. The user gets something simpler, the underlying system maintainer gets something a bit more complex. Or the user gets something more complex and with more boilerplate but the underlying system maintainer gets something simpler (though will be pestered with, "Why don't you offer a generic set yet?" asked for years to come).
> meaning if you have changes which impact multiple layers of wrapping, it's harder to determine what to change, and to maintain the understandability of each layer.
When this happens, in my experience, it has meant one or more of:
1. The choice of how to wrap/abstract was poorly chosen
2. The choice was made too early (before the problem was properly understood)
3. A major change was made that would've been hard to identify/plan for earlier
I ignore (3) when writing code beyond what's reasonable to plan for. (1) and (2) though mean I mostly agree with this:
> Create an abstraction at the last moment
But rephrased, borrowing the phrase I first saw in some Lean Software book, "last responsible moment." It's not sensible, for instance, to use a map to booleans as a set throughout the project's life and only wrap it at the last moment. If you know it's going to be a set, wrap it early because this offers clarity to your code and reduces boilerplate/noise. If you know you need a stack, and have a vector available, wrap it and hide the random access option. If it later turns out that you also want random access, you can offer it, but if it's been available from the start then users will have abused that and you won't be able to rein it in later (without a lot of effort and heartache).
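For instance, a minimal sketch of wrapping a slice as a stack that hides random access (IntStack is a hypothetical name; without generics this gets duplicated per element type):

type IntStack struct {
	items []int // unexported, so callers get no random access
}

func (s *IntStack) Push(v int) { s.items = append(s.items, v) }

func (s *IntStack) Pop() (int, bool) {
	if len(s.items) == 0 {
		return 0, false
	}
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v, true
}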
I'm not sure how I feel about the `set` wrapper. I suppose it's nice to hide some of the details of how the set works. On the other hand, it is confidence-inspiring to be told "this is just a map, it's really that simple" as a user. I have a similar conflict about string alias types like `type MyId string`.
But it's not a Map, it's a Set. If I see an API return a Map, I expect it to return a relationship of keys to values, because that's what a Map is used for. If I see it returning a Set, I expect it to return a collection of unique values, because that's what a Set is used for.
I mean, you could support only List objects in the language and call it a day because they can be used as anything else. Or only lambdas, for the same reason. At the end of the day, though, having structures for the various ways you want to treat data is helpful. Using the right structure to hold data reduces cognitive load.
> A Map [with value type = void/unit/() [0]] and a Set are two different things
Not to defend Go or anything, but that's like saying:
| An Array [with element type = byte/char/u8] and a String are different things
It might be useful to call them different names (of course that would require Go to support generic typedefs for `type Set k = Map k Void`), but they're still fundamentally the same thing.
0: which, to be fair, is not the same thing as a map with bool values.
A map[T]bool has 3 states for every key: absent, true, and false. A map[T]struct{} has 2 states for every key: absent and present.
People new to Go tend to pick map[T]bool or map[T]int because they're used to using bools and ints throughout their code, but struct{} is the correct value type for sets. (That is not to say that a counting set, map[T]int is useless, however. If you need that, use that!)
Hehe. It definitely was... I think this is also still somewhere in "Efficient Go". However, this seems to have changed in recent years. I was surprised by this too, and personally I still prefer the bool even though it uses a bit more memory.
People argue there are 3 states, but in my opinion that's meaningless because you can just ask exists := someMap[someKey] without checking for existence as you do with real maps. Here false is equivalent to non-existent.
There is a big difference here. The draft is about adding generics to the Go stdlib, not about whether it's possible. Is it even possible? Yeah, there are some implementations (e.g. https://github.com/cheekybits/genny). So Go is not "far behind" C, which also does not have generics (or did I miss something in C11/C17?).
It seems to me like generics are extremely important for "library code", and not super important for "application code" (and in fact they can sometimes create more confusion than they're worth in the latter context). Go also seems like a language that thrives in smaller-scale, application-focused contexts (microservices being the obvious example).
So in this light, and with the basic generic data structures supplied by the standard library, it seems to make sense for "user-level" Go code to generally be better-off without generics
Of course the line between "library" and "application" code isn't well-defined (especially if you consider libraries outside of the standard one), which is probably where most of the pain-points are coming in
> It seems to me like generics are extremely important for "library code", and not super important for "application code"
I find that it really depends a lot on the language you're working in, and how well it does generics.
In Java, I don't use generics much beyond collections, streams, stuff like that. Whenever I try, I tend to trip over its relatively limited implementation of the concept.
In a language like F#, on the other hand, generics are the cornerstone of my business domain modeling. They provide a way to map everything out in a way that is much more concise, readable, type-safe, and maintainable than I find to be possible in many other languages.
I have yet to kick higher-kinded polymorphism's tires in a good context, but I can see where a good implementation of it would move things even further in that direction.
(edit: Disclaimer: This isn't meant to be a statement on Go or the advisability of this proposal. Go isn't really meant for the kinds of applications where I've seen real benefit from generics.)
Whether you find yourself using them and whether they're actually necessary are two different things :)
I've gotten use out of generics in "application code", but I've also been bewildered by overly-complex generics-within-generics-within-generics written by other people in application code. It's hard to be conclusive, but I wouldn't be surprised if they've done more harm than good across application contexts.
To me, that's a shining example of the problems I've run myself into when trying to squeeze much power out of Java-style generics. I never seem to encounter similar problems in F#. Scala, it depends on how successful I am at not losing a boot in the mud.
Generic programming was born in a language whose other pioneering features were algebraic data types and an HM type system. I've never really seen a first-rate example of one that didn't come paired with at least passable examples of the others.
It's a real pain in the ass not having generics any time you're working with algorithms and data structures. Linked lists? Graphs? Trees? Go is generally quite nice to work with, but implementing these basic structures again and again with different underlying data types makes me feel like I'm writing Java. Which is ironic because, you know, Java has generics.
I think the idea is that these fundamentals could/should be supplied by the standard library
Ironically, despite all their differences, Rust actually has a similar situation: it's really hard to write the fundamental data-structures in Rust, so they've put a focus on having really good standard-library implementations and people are generally content using those (in Rust's case it's because the borrow-checker makes pointer twiddling hard, but the outcome is similar)
They kind of did this with maps and slices except that they baked them right into the language instead of the standard library. Like, map is a keyword. The standard library doesn't have many data structures at all because, well, without generics they're not very useful. There's a few things like a linked list and a thread-safe map that accept interface{} types but then you're basically throwing the type system out the window.
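For example, container/list from the standard library stores interface{} values, so every read needs a type assertion; a small illustration:

package main

import (
	"container/list"
	"fmt"
)

func main() {
	l := list.New()
	l.PushBack(1)
	l.PushBack("two") // compiles fine: the list accepts anything

	for e := l.Front(); e != nil; e = e.Next() {
		if n, ok := e.Value.(int); ok { // caller must assert the type back
			fmt.Println(n)
		}
	}
}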
> I think the idea is that these fundamentals could/should be supplied by the standard library
There is basically no limit to the number of data structures possible, nor to the possible implementation details of most of them, all of which can be relevant to the situation at hand.
The stdlib can hardly be expected to implement them all.
If they're very specific to the situation at hand, they're much less likely to need to be generic. The GP explicitly mentioned "Linked Lists" and "Trees". You don't need to be writing your own linked-list or (basic) tree from scratch.
> The GP explicitly mentioned "Linked Lists" and "Trees".
And graphs.
> You don't need to be writing your own linked-list or (basic) tree from scratch.
Trees are rarely useful in and of themselves; what's useful is the data structures you build out of them. And that, in turn, informs a significant number of properties of the tree you'll be using as well as the operations to perform. The stdlib providing "a basic tree" and essentially telling users to get bent would be worse than useless, it would be actively insulting.
Even for the humble "linked list" there are half a dozen possibilities: singly linked? Immutable? Doubly-linked? Circular? Array-of-nodes?
> I think the idea is that these fundamentals could/should be supplied by the standard library
Data structures generally need to be parameterized on the contained types if you don't want to waste the effort of even having a static type system, which makes it impossible to do this right without generics.
Though again, Go as a whole seems ill-suited for scaling to larger projects because of lots of other limitations on its type system, reliance on conventions, implicit-defaults, etc. Which makes it well-suited to (and often used for) things like microservices, where each actual codebase is smallish. Codebases like these will tend towards having less "library-like" code anyway, which means they don't need generics as badly. There's synergy here in the language design.
So I guess what I'm saying is: leaving out generics seems like the more "Go-like" direction, will dovetail better with its overall philosophy, etc, and isn't without advantages. But it would also mean kneecapping the language when it comes to certain use-cases that it's never going to be great for anyway. It's the classic "opinionated" vs "everything for everybody" dichotomy
Why do you say Go has trouble scaling to large code bases? Is that something you'd expect, or something borne out by the evidence? And if so, what is the evidence?
FWIW I would take fasterthanli.me with a grain of salt. The guy is a serial Go hater. His points stand on their own, but I don't think he appreciates Go's benefits. I think "A Philosophy of Software Design," Rob Pike's talks, or Russ Cox's blog posts are a good place to look if you want to understand what is valuable about Go and the reason to believe it would actually scale very well to large codebases.
Thanks for the references, I'll look at what the other side has to say
I am aware that fasterthanli.me can be a bit, shall we say... opinionated. Though as you say, his points do stand on their own. I can see the things he points out about the design philosophy of Go's language features and standard library and draw parallels to languages and libraries that I've used firsthand, and had firsthand frustrating experiences with when it came to navigating their magical behavior and lack of enforcement of contracts. But I'll keep an open-mind
Certainly. As a counter-example to the pain of implicit functionality, take the UNIX file API. `open(2), write(2), close(2)...` represent hundreds of thousands of lines of code, spanning network devices, local file storage, integrity checking, and who knows what else, and all of it is hidden. It is precisely the mountainous heap of implicit behavior that gives these APIs value.
That being said, APIs with implicit function that are broken or surprising are painful, but I take this not as an indictment of implicit function, but as an indictment of buggy APIs. I think state and hidden functionality is the essential ingredient of highly useful code.
When I worked at $big_company, we used a lot of code generation and reflection to work around the lack of Go generics in things like API interfaces to other services and test mocks. This was far from ideal, because it significantly increased compile times, and some things stopped being typesafe or had unfortunate type-related bugs.
If we're dealing in anecdata, mine is that "Go compiles fast!" is true right up until something in your dependency tree hauls in Kubernetes repos, perhaps multiple times. Thanks to the "first principles" design of go mod, that's becoming increasingly unavoidable.
Kotlin does compile much slower than I would like, but at least I only haul in one version of libraries and 0% of it is generated code. Java is basically instant for me on the same codebases.
The TypeScript team should consider rewriting the type checker in Rust or Go IMO. It's free, they can do what they want, but the performance is really not good (compared to what it could be) and affects a lot of people.
It doubles down on Go's assumption that a git repository === a proper package/module system. It mixes up URLs and URNs.
If your git repositories aren't tagged just so, then go mod throws its hands up and simply invents a wacky snapshot version. Because it can't properly determine "earlier version" from "later version" for that snapshot, you often wind up with multiple snapshots from the same repo, not infrequently transmitted through other dependencies.
This is just jolly good fun when it turns out that your dependencies are pulling in incompatible versions of things. Since the official Kubernetes policy for downstream consumers is "we don't care about downstream consumers", it happens more quickly than one would expect.
As much as I have hated playing whack-a-mole with Maven or Bundler, I hate even more playing whack-an-adamantium-and-invisible-with-xray-eyes-mole against go mod.
_Technically_ you're right. Go considers v0/v1 to be a distinct module from v2. Most people would consider this to be the same module, but Go doesn't. If you want to know more, you can go read the manifesto released by the maintainers about how this is "the best thing ever".
If you never tag v1, you'll never have to deal with it.
My code before a transitive dependency pulled in the k/k universe took milliseconds to compile. Afterwards it took about 10 seconds to compile. Laboriously compiling thousands and thousands and thousands of lines of nearly-identical code turns out to be much slower. There are no clever shortcuts for a compiler that cannot deduce a higher intent.
That's exactly what generics are supposed to solve! ;)
No, but seriously, I'll be interested to see if they can pull off maintaining the compiler performance while adding support for this new feature. I've had to hand-write a lot of code that I'm excited about a generics solution automating, but automation can have a price, you're exactly right. I've worked on a C++ codebase before that couldn't physically compile on my machine because it blew stack on template instantiation recursions (issue never noticed because the original developer had a better machine).
In our large production project we have a very simple Go architecture that enables 15+ microservices. I'm not sure I see the value of generics either, other than further complicating an already complicated distributed architecture with abstracted implementations.
It's beautiful that we still import 6-year-old packages that are clear, concise, and work as needed. The package landscape with generics doesn't seem great.
I don't think generics will suddenly make your, or anyone else's, codebases more complicated. It might simplify code that's currently doing type assertions.
I'm pretty convinced they could have added some additional structure libraries to the standard library and called it good. Some map stuff / set stuff etc.
For many it's really not a big deal not having generics, and it makes code SO much easier to follow (and compile, debug, etc.).
This is not the first time I argue this case, but I would go a step further and say that not having generics is a feature of Go.
There are plenty of languages out there with Generics. I use several of them. I use Go when that suits me, and it's typically for cases where high readability trumps doing a lot of magic with generics. I think only once or twice have I thought to myself, "this would have been better with Generics".
Generics aren't magic, and they aren't Turing-complete like C++ templates are. They're just a way to avoid copy-pasting code.
Having `Set<Foo>` and `Set<Bar>` is far more readable than `class FooSet` and `class BarSet`, where the code is exactly the same aside from a search-and-replace.
You also run into issues where someone finds a bug in `FooSet`, but doesn't know `BarSet` exists, and forgets to patch both. Now, you have two divergent copy-paste classes.
Generics solve a bunch of real-world problems, in a very simple manner.
Sorry, when I said new data structures, I meant containers like maps and list, which I very rarely get to create day to day.
I can see it for your transformations, but I have seldom seen cases where generics would really help (usually we're talking about comparing complex structure types that will need custom code anyway).
"Theorems for free"! By which I mean, when I'm coding something and it could be universally quantified on the type, then it's better to do so. That way, it's impossible to phrase certain errors. For example, a function `Set<a> -> Set<a>` must produce a subset of the input set; that's guaranteed by the types. I can't accidentally include the value 0 in my set, because 0 isn't of that generic type. The generic forces you to think in terms of the structure you're manipulating, rather than in terms of its contents.
You can't use interfaces to prevent inserting values of the wrong types into a set. When you view the purpose of types as preventing bugs, that seems like a giant missing feature.
“Can you give a real world example that couldn't be solved with interfaces?”
Of course not. Go is Turing complete, and generics do not make it Turing-completer (whatever that may mean)
The question isn't about what can or cannot be done, but about expressiveness and ease of understanding for humans versus language size (even if you have plenty of disk and RAM, language size correlates with bugginess of the compiler) and compilation speed.
I think you're looking at it wrong. You can absolutely solve it with interfaces, the problem is those interface methods are identical, so it's duplicative.
> I think you're looking at it wrong. You can absolutely solve it with interfaces
There are lots of generics use case you either can't solve at all with interfaces, or you have to contort every which way and usually lose something in the process (type-safety, performances, readability, …).
> Can you give a real world example that couldn't be solved with interfaces?
There are none; you can always work around them, with the only downside being that you'll move some potential compile-time errors to runtime errors.
But that's the wrong conversation to be having; we could also say "why do we need floats? Everything can be solved with ints too", which is true, but also a lot of work and a poor trade-off. Similar arguments exist for many language features.
So it's a question of trade-offs: how much time will this save people? Will it reduce faults? And what are the costs of adding this? And how do they balance?
The introduction of generics would not change your workflow though. You could still happily "not use it" and keep matters readable. Others who wanted it, would use it.
That ship already sailed with widespread misuse of interfaces. Consider:
package foo

type T interface{ Bar() }

func New() T {
	return &someotherpackage.ImplPickedAtRuntime{...}
}
Now whenever you see:
x := foo.New()
x.Bar()
You have no idea where to read the code for Bar. For maximum fun, ImplPickedAtRuntime should then contain members that were allocated in the same way. What should be a simple M-. then eats up your entire afternoon.
If other people's code uses it, then those other people deemed it useful.
So the argument for not having them now becomes either:
(a) you'd rather not have it available, because you personally don't find it useful
(b) those using generics don't know what they're doing, and only people not using generics are smart, so it's better not to have them, to prevent the clueless from being able to use them
Like I said, I use (and like) several languages that have Generics, and when I need to do something where it makes sense, I can reach for those. For me there was an advantage in having a language where it wasn't an option.
Your (b) scenario is quite a strawman. I have seen plenty of good code using Generics, but sure, there is a subset of code written using Generics that is not good, and I think it becomes easier to obfuscate code and make it hard to read if you have Generics. That might just be my bias, though; I think I'm tainted by C++ and hopefully it will never become as bad as what you can encounter there.
I'm not trying to make out that Generics have no place in Computer Science. I was trying to make the case for it being nice that there was a language that didn't have them, and I was building on the grandparent saying that he didn't miss them that often, which mirrors my experience with Go.
This is a silly strawman. Developers often write hard-to-read code, and even if generics are useful to the writer, that doesn't mean they are useful to the reader. Many developers do not consider the reader, or if they do, not very in depth. You can also make the argument that generic code is uniformly harder to read than specific code.
> Doing the same in Kotlin, laptop freezes, android studio freeze, time to get a coffee
I work in Kotlin every day and my laptop never freezes.
I'm sure compile times are shorter in Go (after all, that's one of Go's main selling points). But compiling Kotlin code doesn't seem to be a huge bottleneck in my experience (and I have worked in some other languages that had truly atrocious compile times cough Swift cough). In fact, it usually takes longer for Spring to boot than for the Kotlin code to compile.
Measuring against Kotlin or other languages in this category, like Scala, is not the right thing to do. Also, Kotlin is known for its slow compiler, though JetBrains has promised that the situation will improve. I would use Scala or Kotlin for everything data-related, but when it comes to low-level networking/infrastructure I wouldn't touch anything but Go. Go is in the perfect sweet spot for this task. The Goldilocks Zone of network programming.
It's such a truly terrible comparison because they didn't compare Kotlin, they compared Android!
The insane amount of overhead an Android project has over "plain Kotlin", from resource packing to dex stuff to desugaring to ProGuard, made me question whether they're speaking in good faith for the rest of the comment...
My ktor projects on the other hand compile incredibly quickly, and with hot reloading it's even more seamless to iterate
The biggest challenge is avoiding the tricks from languages that allow you to use the language to paper over questionable design choices. Go requires that you get the design right up front and provides few escape hatches to save you if you mess up. Which is a good thing, but makes the language (not just the syntax) difficult to learn compared to others.
Can you give examples of specific language features of go that prevent you from making questionable design choices, that require you to "get the design right up front"?
When I hear that, the first things that come to mind are things like Haskell's IO monad, which forces you to model IO better than Go or most other languages do, or Haskell's other state monads, which similarly force you to model state more explicitly.
I think of Rust's lifetime and ownership system, which forces you to correctly model the ownership of types and prevents quite a few bad design patterns (which I see constantly in Go, by the way; the number of times I've seen races due to multiple goroutines writing to a struct is large, and the number of times I've seen incorrect synchronization that Rust would have prevented is large).
I can't think of anything in Go that pushes you towards designing your code well, especially when compared to languages with more complete type systems.
> Can you give examples of specific language features of go that prevent you from making questionable design choices, that require you to "get the design right up front"?
No, obviously. I said that Go doesn't give you an escape hatch if you screw up. It does nothing to protect you from screwing up. I specifically said that the challenge was in learning how to not screw up as the language doesn't help you deal with or avoid design mistakes.
Aaah, I read "Go requires that you get the design right up front" totally wrong, as in "go prevents you from getting the design wrong", not "go lets you get the design wrong, and then doesn't help you".
I still don't get your contrast to other languages though. Are there specific language features other languages have that let you paper over crappy designs?
Generics make working with reactive APIs much easier, which I think fits with a large chunk of Go's target audience (b/e networked services) quite closely.
Writing APIs to deal with futures, etc, is much easier when you can chain parameterised functions together.
Two more levels of blogs down, the actual proposal.[1] Definition:
// Print has a type parameter T and has a single (non-type)
// parameter s which is a slice of that type parameter.
func Print[T any](s []T) { ... }
Call:
Print[int]([]int{1, 2, 3})
Above, "any" is really just a synonym for "interface{}". You can have more restrictive type constraints on parameterized types by specifying other Go interfaces. This is vaguely similar to how Rust does it, and quite different from the C++ approach.
"This design does not support template metaprogramming or any other form of compile time programming."
My understanding from the “Featherweight Go” paper https://arxiv.org/abs/2005.11710 is that generic types will not simply be a synonym for interface{} because the compiler will be able to monomorphize them - they do not require dynamic dispatch like interfaces.
> My understanding from the “Featherweight Go” paper https://arxiv.org/abs/2005.11710 is that generic types will not simply be a synonym for interface{}
The `any` constraint is a synonym for the `interface{}` constraint.
I rarely find myself frustrated with the lack of generics in Go and am so glad to never deal with the kind of over-engineered generic madness that is so common in Java, except...
When dealing with collections. It's maddening to have to keep duplicating basic functions like getting the keys from a map, or checking if a slice contains a given item.
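A sketch of what those helpers could look like, written once under the proposed syntax (Keys and Contains are hypothetical names):

// Keys collects the keys of any map into a slice.
func Keys[K comparable, V any](m map[K]V) []K {
	keys := make([]K, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	return keys
}

// Contains reports whether target occurs in s.
func Contains[T comparable](s []T, target T) bool {
	for _, v := range s {
		if v == target {
			return true
		}
	}
	return false
}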
Aren't collections 30% - 40% of code? (We seldom deal with just one thing.)
That's why I feel generics are so important.
You can build complicated messes with any programming paradigm. It's a matter of discipline to use the tool correctly. Don't hate on generics, but rather the unskilled use of them (which I frankly see far less than abuse of other patterns/paradigms/language features).
The biggest negative with generics is compile time, but the clarity and conciseness of generics is worth it for me.
I've been heavy into Go the past year. I love the simple interfaces they've built over some rather complicated things (concurrency, cross-compilation, networking, etc), which really do tend to just work.
I fear that Go will eventually turn into something where we look back and realize we've lost something important by gaining a lot of less important things. The impulse to change things is just too strong these days. C89 has done just fine unchanged for 30 years.
All I want is a C+=1 I can rely on for the next 30 years.
With all the CVEs we see every month due to what can be only called design flaws in C, I have a hard time saying that C did fine for last 30 years.
For C+=1, I'd look at Zig; it unfortunately lacks the excellent corporate support that Go enjoys.
To me, Go looks much like early Java, only with much better concurrency and saner "OOP". If anything, generics made Java better in many ways, without sacrificing performance or usability. It took 7 years for Java; it's going to take closer to 10 years for Go, but better late than never.
> With all the CVEs we see every month due to what can be only called design flaws in C, I have a hard time saying that C did fine for last 30 years.
We can argue endlessly about which metrics of success are most important.
Zig looks cool. I've seen it mentioned a few times over the years. Looks like manual memory management is the default, yeah? That's important IMO if you're really trying to replace C. Rust is great but I just can't iterate fast enough (yet). Does Zig offer an optional GC?
EDIT: Also I never said Go is a good C replacement. But I'm finding it useful for some of those tasks and suspect it will be less useful for them down the road.
I think they're talking about the simplicity of C.
But yeah, I like to think of Go as a Java for the new millennium.
We're a Java shop and lots of people hate some of the newer changes to the language. And how OOP focused it is. I think Go would be a better fit because it seems to match the philosophy of our team more. But don't really think it's worth the switch for us.
I've used Java and Go. I find Go a far superior experience. Part of that is the standard library which seems to strike a perfect balance providing what you need but not too much.
I also think a lot of it has to do with the culture of the languages. Kotlin is a pretty nice language, but using it for Android still makes me want to hit my computer with a hammer because the over-abstraction of the Java ecosystem is maddening.
That's definitely the path they're headed towards. @FunctionalInterface from Java 8 and type-inference from Java 11 and instanceof destructuring from Java 14 are the big changes.
"Proper" modern Java is functional, at least at the surface level.
The big issue is that the ecosystem is kind of stuck; it's freaking 2021 and we're still targeting Java 8.
Also the type system could be better, null references are everywhere even though there is Optional<T> (I wish it had optional strict null checks a la Typescript, but I'm not sure how realistic that is), and the fact that you can't have List<int> because generics won't work on primitives is just boneheaded.
But yeah, the language isn't the "Dog extends Animal" bullshit they teach you in school, at least not anymore.
It doesn't matter. Eventually you will need to use some library that's from the old days. You can't escape the abstraction. Not everything has been or can be updated to modern standards so just like C++ eventually every bit of the language ecosystem will come back to haunt you.
Go doesn't suffer from this yet because it's too new and the philosophy is different which keeps some of it at bay, but it's only a matter of time. Entropy always wins.
> The impulse to change things is just too strong these days.
FWIW, I don't think this impulse is there with the Go team. Go progresses quite slowly, from what I've seen as a user over the past 18 months. Generics have been in discussion for a long time with multiple implementations and no real rush to "just ship it".
I went from writing almost 100% Go to an environment where I write 60/40 Rust/Go.
The worst thing we can have on any HN thread is a debate about the virtues of Rust vs. Go. They are different languages with somewhat different long-term goals and very definitely different short-term goals, and these threads are never interesting in anything but a sort of sporting event spectator way.
I will just say that while there are a lot of things I like more about Rust than about Go, generics in Rust come at a cognitive cost. They're infectious; they don't get used the way people say they need them for Go ("I need to be able to sort arbitrary things and have sets of arbitrary types"); they're as fundamental to Rust as interfaces are to Go. It adds a lot of additional indirection.
Correct. As someone not doing Rust full time I do get tripped up on Rust's pervasive generics. Things like, "should I take a T argument here, or a U: Into<T>, or U: AsRef<T>" etc. Just seems like there's a lot of open world choices in the language which make it hard to get going quickly as a beginner
I like Go, but I too in the meantime have dipped my toes into Rust and it's just so much better without being that much more complex. The learning curve is real but quite a bit overstated I think.
There's this common belief that "rust is too hard", which used to be actually true, but the docs and the language itself came a long way since those times.
I'd say: if you can code in C#/TS (or anything similar) + Go, then it only depends on whether you have a free weekend.
Rust has quite a few concepts you won't find in (some of) those languages like borrowing, lifetimes, traits, monomorphization, macros, type semantics around concurrency, (partial) expression based syntax, pattern matching and match guards...
However you can litter your code with unwrap and clone to reach the finish line quickly, but then you lose the two main value props of the language and likely lose performance and runtime consistency over other languages.
New concepts, yes, but I think anyone who has spent any significant time programming in C++ will immediately recognize the problems that they are solving and how the solution works. That significantly eases the learning curve in my opinion.
Indeed I am coming from Python (and some Go), and I'm coming to Rust because I want a language that makes me think about these things. But those who do not want to be bothered are likely to be turned off.
Okay, fair. For myself, I'm coming from extensive experience in both Python and C++, and although I'm admittedly still in the honeymoon phase, my assessment so far is that Rust is an excellent union of the two.
Basically I get the high level abstractions and package management that I expect from Python, while inheriting a set of tools that help finally realize some of the high performance, zero-copy idealism of C++ (slices, lifetimes).
C++ programmers are used to thinking in terms of object lifetimes, where they should be allocated, and how to signal ownership and object consumption across an API.
That's why it's better to learn Rust coming from a C++ background.
(And in my experience this mental model can also be of value even in languages where memory management is handled by the language runtime.)
I mean, the biggest change for most people with Rust is really the borrow checker, aka the lack of (automatic) garbage collection. I think it takes much longer to get used to it if one hasn't worked with, say, C or C++ before.
It really depends on what you are going to do with Rust. Sometimes it can be really tricky, but for simple tasks where we used to use scripting languages Rust can be similarly simple.
I'm taking my second crack at learning Rust and I have to say I've made a lot more progress this second attempt. It could be just giving things time to stew in my head, but I really think it's because I'm using rust-analyzer with vscode and before I was using RLS. Rust-analyzer is a much richer experience and its informative error messages and suggestions lessens the learning-curve drastically.
In my experience a combination of the rust book followed by the too many linked lists book was enough to give a pretty good idea of Rc, RefCell etc and how they can be used.
With Rust, sometimes I'm just breezing along wondering why people say Rust is hard. Then I try to deserialize some JSON into a type that borrows a reference...
Also, Rust struck me as quite a lot more challenging before non-lexical lifetimes and rust-analyzer, so it's possible that you're responding to outdated criticism.
I would say that in the case of Rust, generics are on the list of those "few other bonuses". There are other things in Rust that are more attractive and innovative. Rust is as complex as C++ (which is not a compliment) but saner and safer, without undefined behavior. Though, at the rate they add new features, I can see it rapidly becoming a kitchen sink.
Rust is a reasonably complex language, but—having programmed professionally in both—I don't think it's nearly as complex as C++. Rust features for the most part are orthogonal to each other, whereas C++ features tend to have weird interactions with each other that are really hard to track and understand. (The one place where Rust starts to get messier in terms of feature interactions is async Rust, but luckily you can program Rust just fine while ignoring the async features.)
Consider initialization: C++ has dozens of different ways of initializing objects that in turn interact in complicated ways with move semantics and references and so forth. Rust's initialization story by contrast is straightforward: you just make the thing you want to make using struct or enum literals and maybe wrap those initializers in functions if you want.
I agree. I have to assume that people comparing Rust to C++ never really used C++ professionally for any length of time. Rust is incredibly simple coming from C++: there are normally only 2 or 3 ways to do something vs 5, plus 4 more via templates. It takes longer to learn the symbols for lifetimes than to get a grasp of what you need to do to keep the borrow checker happy.
It's very slow for the initial build, as it has to compile all its dependencies. From there, if you add in full async/await support with a web server framework, you're looking at ~6s iteration time. If you bring in LLVM's LLD instead of the GNU or MSVC linker, you can bring that down to ~3-4s. They're also working on support for an alternative code-generation backend, Cranelift, which should further reduce those times. It's only intended for debug/development builds, though, so you'll still need longer compile times for release.
While the compile times are a bit slower, one of the advantages of the strict type system is you don't need to compile to an executable quite as often.
Most of the time your IDE's messages (or "cargo check" output) is sufficient to find all the things the compiler will complain about.
I usually find that once I've fixed all of those, I compile it once and it just works.
I look forward to generics in Go. Yes, it's possible to do it with reflection, interfaces and interface{}, but it's not typesafe, it's not fast, and it's prone to code bloat.
I'm a fairly late-comer to generics, I never programmed seriously in C++, I avoided generics in Java initially, and I wrote a lot of code in less statically-typed languages. Ever since the first serious talk of generics in Go 2.0 I've endeavored to educate myself and I now am very strongly in favor of them.
Generics in Go will be a great addition. It's also important to realize that generics in Java and Go are different, in that Go uses structural typing vs. Java's nominal typing.
I'm really mixed about this. As a developer I would love having generics in Go. I can think of a few places in my code I can greatly simplify if they were implemented now.
However, as someone who reads other people's Go code, I'm not a huge fan. One of the greatest things about Go is that a developer can usually one-shot read and understand almost anybody's code because there's a simplicity "forcing-function" applied to everything. To lose that would be a shame.
Exactly. I found myself so frustrated when learning Go and digging into the "gotchas" of goroutines. There is so much non-obvious complexity that could be completely avoided by providing generics so that someone else can develop a package to handle the issues for you.
I agree totally with this sentiment. I love reading other people's Go code but not always writing it, which is basically the opposite of virtually every other programming language I've used.
It's too bad this is targeting end of year, I have so many applications for this--test assertions, http controllers, SQL--this will remove a lot of duplicate code. I also think it will expand use cases for Go, especially in the UX area where you have to implement duplicative getters and setters.
Another comment mentions this is the third (serious) attempt at adding generics to Go.
Is there any concise history of these attempts (including this one)? I'd like to understand the gist of these proposals and what ultimately derailed them.
By themselves these proposals are pretty inscrutable.
The major difference is that the first proposal separated interfaces from contracts; later proposals (very wisely) unified them. Only contracts could be used in type constraints.
They also switched from parentheses () to square brackets [], thankfully.
Actually not that much, I remember 2 very different proposals, including this one.
The previous one was confusing as F and the antithesis of the simplicity Go claims to abide by. It felt very much like a plot to add generics without ever using the word generics anywhere or looking too much like Java/C#/...
The current proposal is basically what you'd expect from generics in a programming language, but a bit more limited.
It took basically 10 years, a generation of developers, to quell the opposition to generics in Go, only to end up with generics...
They might even have unknowingly followed the Ada implementation, except that Go's type inference makes them even easier to use.
> To use a generic type, you must supply type arguments. This is called instantiation.
Not familiar with Ada, but the inability to stop a running goroutine from the outside is annoying and leads to a lot of state just for telling it to exit. Sometimes I just want it to stop now.
It's not like pthreads where we have a thread handle; that would complicate many things.
Like you said, we stop the goroutine by telling it to exit the function; it's quite straightforward I think. Nothing more than a channel and a select case (+ context).
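A minimal sketch of that pattern (jobs and the worker loop are placeholders): the goroutine owns its loop and exits when the context is cancelled.

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	jobs := make(chan int)

	go func() {
		for {
			select {
			case <-ctx.Done(): // asked to stop from the outside
				fmt.Println("worker stopping")
				return
			case j := <-jobs:
				fmt.Println("processing", j)
			}
		}
	}()

	jobs <- 1
	cancel()                          // tell the goroutine to exit
	time.Sleep(10 * time.Millisecond) // crude wait so the message can print
}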
How would you expect that to actually work though? If a goroutine could be arbitrarily stopped at any point, that would trivially lead to degenerate program states. A lock could be held open or another goroutine could be waiting on it for a message.
I think this makes the third official attempt. I have high visibility into the process, and I think it's likely this one will stick. It'll likely take 2 releases (as Ian stated) to get it done.
I find the proposal interesting. Type constraints might help to reason about a given abstraction. If I understand correctly, they behave more or less like sum types over interfaces.
As much as I've cursed the lack of generics and the limited expressiveness of Go's type system, it's hard for me to reconcile these proposals with what I know of Go. Go was conceived as a small language, a successor to C, and purposely eschewed "new-fangled" features of modern languages. Whether the result is good is a matter of taste, but I feel that retroactively bolting on a modern type system will simultaneously (a) undercut the simplicity of the core language and (b) produce a language that is not as clean as those conceived with generics from the beginning.
I don't think that Go ever was a very good successor to C. Go might have been conceived as a systems language, but simply because of the GC it can never ever fill that niche.
Go is a perfectly fine middleware language, but no C replacement. (Rust does much better in that regard).
C doesn't lack safety by design - it's hobbled by its history and the constraints imposed by its userbase - otherwise C would have major breaking changes more often.
C absolutely lacks type-safety by design. Otherwise what would malloc return, besides void*? How would you implement generic containers in C, without using macros or void*?
> Otherwise what would malloc return, besides void*
IIRC, historically void* didn't exist and char* was used instead. C later added void as a unit type, and absent historical baggage it could add... let's call it "noreturn" as a bottom type:
#define NULL ((noreturn*)0)
noreturn* malloc(size_t);
void free(void*);
noreturn exit(int); // never returns
int main() {
// where (T)... is any expression of type T
void x = (T)...; // throw away the value
T y = (noreturn)...; // a nonterminating expression
void* p = (T*)...; // pointers convertible *to* void*
T* q = (noreturn*)...; // pointers convertible *from* noreturn*
*(noreturn*)...; // notionally, this should always fault
*(void*)...; // read zero bytes, so always fine
}
Rust and Go both aim to create safe, performant, modern languages suitable for use from the systems layer up, differing in some of their primary goals:
Rust is a sane C++: Zero-overhead abstractions, not afraid of language complexity.
Go is a modern C: Simplicity, stability.
They have some overlap in what they're best at, but they both take on unique and important missions that both need to be targeted, in our rapidly expanding universe of software engineering.
At this point it’s history: Go and Rust both got started around the same time - one by Mozilla and the other by Google. They’ve both reached critical-mass over the past ~10 years - so expecting one of them to disappear is like expecting Autodesk to choose between 3ds and Maya...
Once upon a time, C was a general purpose programming language--it wasn't always exclusively performance critical systems programming. Anyway, in practice you can use arenas or other techniques to alleviate GC pressure. In my experience, the GC isn't a big performance issue; rather, the Go compiler doesn't optimize as aggressively as the C compiler.
Back in the day it was simply the best option available. Java and Python only came out in the mid-90s and would take some time to catch on, develop a useful ecosystem, etc. And if you wanted to interact with system APIs, everything was in C. Nowadays there are much better options for a huge swath of applications. I would argue that--every bit as important as the language issues itself--one of the biggest reasons to avoid C is that all of its popular build system options are terrible. You're just expected to have the right dependencies installed on your system at the right versions and in the right locations (okay, some tools will try and find the location for various dependencies, but this is a pretty poor substitute for proper dependency management). And that's merely scratching the surface of the issues with C/C++ build tooling.
"If the proposal is accepted, our goal will be to have a complete, though perhaps not fully optimized, implementation for people to try by the end of the year, perhaps as part of the Go 1.18 betas."
Sidetracking a bit: I wish there was a popular programming language like Go with a Rust-like package manager, Python-style syntax and hackability, compiled, classic OOP (classes, methods), and fast. Or I wish Go had classic OOP and the ability to raise exceptions.
Basically, I want a fast, statically typed Python with better package management. Or, to put it another way, I want Go with classic OOP and exceptions.
The carefulness of the Go team when introducing new features is remarkable.
After many years chasing the newest, shiniest and best tools I can't appreciate stability and a large and useful standard lib enough. I finally understood the importance of the boring stack.
I don't need generics in Go, but I'm happy they are coming. Especially for collection methods.
It's strange that they don't consider Print[T](x T) instead of Print[T any](t T). The "any" could just be omitted without loss of anything. Especially since repeated types with the same constraints indeed DO omit it! Print2[T1, T2 any]
Well, it doesn't omit it; it just applies to all preceding type parameters that didn't have a constraint, same as with normal Go function arguments. It's still being explicit about the constraint being `any`.
> Interface types used as type constraints can have a list of predeclared types; only type arguments that match one of those types satisfy the constraint.
Generics in Go are vastly different than templates in C++. They might be used for similar things, but whereas Go's generics actually build up on Go's structural typing, templates are ... something completely different again.
I mean; C++ templates are Turing complete. They are in the same ballpark as Scala's type machinery. And I say that with adoration.
This proposal is fundamentally different from C++ templates as they exist today. You don't just chuck a type in there and let the compiler have a go at it, SFINAE style.
Generics are awful. They solve no problem in the domain space, only the developer space.
Which then creates the problem of developers who insist on writing infinitely extensible generics with indecipherable bounds.
Just repeat code. You'll be fine.
If you find yourself repeating a LOT of code because Go does not support generics, maybe stop and think about your design. Putting generics in as an escape hatch will do you more harm than good, guaranteed.
- https://news.ycombinator.com/item?id=20576845 (2019 draft)
- https://news.ycombinator.com/item?id=20541079 (also 2019 draft)
- https://news.ycombinator.com/item?id=23543131 (2020 draft, i.e. the base version of the current draft)