> I can't answer a design question like whether to support
> generic methods, which is to say methods that are
> parameterized separately from the receiver.
I work on the Dart language. Dart was initially designed with generic classes but not generic methods. Even at the time, some people on the team felt Dart should have had both.
We proceeded that way for several years. It was annoying, but tolerable because of Dart's optional type system -- you can sneak around the type checker really easily anyway, so in most cases you can just use "dynamic" instead of a generic method and get your code to run. Of course, it won't be type safe, but it will at least mostly do what you want.
When we later moved to a sound static type system, generic methods were a key part of that. Even though end users don't define their own generic methods very often, they use them all the time. Critical common core library methods like Iterable.map() are generic methods and need to be in order to be safely, precisely typed.
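To illustrate the shape (a sketch in Haskell-style notation rather than Dart, just because it makes the type parameters explicit): the result element type has to be a parameter of the method itself, not of the receiver's class.

```haskell
-- A map over lists: the input element type `a` could come from a
-- generic receiver, but the result type `b` must be introduced by the
-- method itself. Without generic methods, the most precise return
-- type available is a dynamic one.
mapIter :: (a -> b) -> [a] -> [b]
mapIter _ []     = []
mapIter f (x:xs) = f x : mapIter f xs
```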
Part of the reason is that functional-style code is fairly idiomatic in Dart. You see lots of higher-order methods for things like manipulating sequences. Go has lambdas, but stylistically tends to be more imperative, so I'm not sure if they'll feel the same pressure.
I do think if you add generic types without generic methods, you will run into their lack. Methods are how you abstract over and reuse behavior. If you have generic classes without generic methods, you lose the ability to abstract over operations that happen to use those generic classes.
A simple example is a constructor function. If you define a generic class that needs some kind of initialization (discouraged in Go, but it still happens), you really need that constructor to be generic too.
> You see lots of higher-order methods for things like manipulating sequences. Go has lambdas, but stylistically tends to be more imperative, so I'm not sure if they'll feel the same pressure.
`range` is generic, and by virtue of that it is a builtin which only works with a subset of the likewise-builtin, magically generic types.
The reason why Go "does not feel the same pressure" has nothing to do with its imperative style[0]; it is because they special-cased a few generic structures as builtins very early on, unlike, say, Java (which only had arrays as a typed data structure).
[0] Java and C# could hardly be more imperative; hell, Java is only just adding anonymous functions.
They also sucked as an interface (for developers to use), resulting in them only being used when no other option was available (Comparable) or for cute hacks (double-brace initialisation).
Absolutely correct. Lambdas are just shorthand for anonymous inner classes which implement an interface with only one method -- aka single abstract method (SAM) types.
For instance, say you have two functions. One takes `Function<Type, Type>` and the other takes `UnaryOperator<Type>`. Giving the "same" lambda to both functions will result in two anonymous inner types, one implementing each interface.
They're not exactly shorthand. They use `invokedynamic` under the hood, which defers the creation of the lambda until it is used. Conceptually they are shorthand, but in practice they're a little different.
Ditto on generic methods both changing the game for Dart and being necessary. I spend most of my time in Dart, Swift and ObjC, and I've built serious applications in Erlang and Java. All of which I like for different reasons.
My opinion is that Dart's type system is the optimal type system. It allows for a statically typed API surface, but at the same time I can still add some dynamic voodoo under the surface and enable productive meta-programming.
As far as I can tell, an "optional type system" is just an unsound type system with a marketing name. Any decent static type system would allow one to progressively increase the level of typing, the weakest form being a Variant type to hold all other types. The advantage here is that any invariants/proofs that are captured in the system are not compromised by unsoundness.
> As far as I can tell, an "optional type system" is
> just an unsound type system with a marketing name.
It is a term with a specific meaning. The key bit separating it from both gradual typing and a static type system with a dynamic type (like Scala or C#) is that in an optionally typed language, the type system has zero runtime effect. If you were to take a source file and mechanically strip out all of the type annotations, the resulting program behaves identically to the original one.
Dart 1.0 was an optionally typed language. Dart 2.0 is not -- we've gone to a mandatory static type system (with lots of inference and a dynamic type). The limitations of optional typing are simply too onerous, and it's just not what users expect or want.
> Any decent static type system would allow one to
> progressively increase the level of typing, the weakest
> form being a Variant type to hold all other types.
That's the dream of optional typing and gradual typing. In practice, I have yet to see a language that does this harmoniously. It sounds simple, but things quickly get very complex when you start working with generic types and first-class functions because the boundary between the untyped and typed code gets very strange. To keep the guarantees you expect inside your typed code, you have to figure out how to handle untyped data that flows into it. That gets nasty very quickly when that untyped data may be an instance of a generic class or a function.
> To keep the guarantees you expect inside your typed code, you have to figure out how to handle untyped data that flows into it.
This would be solved in a conventional static type system by forcing the user to eliminate the variant, and forcing the user to deal with the case that it has an unexpected value.
So I cannot see why even C# dynamic is necessary, except to perhaps save some thinking, but this isn't a good thing IMHO.
The dynamic type was added to C# to facilitate interoperability (e.g. with the DLR), not so much to encourage lazy devs to get their bugs at runtime rather than compile time.
> I still do not understand why this cannot be done via a Variant type.
Makes little difference, either way you need a magical type since it has to allow any method call at compile time, and then requires enough RTTI that it can check for and dispatch method calls at runtime. Whether you call it Variant or dynamic has no relevance.
The problem munificent was talking about is that for more complex types you cannot write that type checking predicate you want, which would check if a value belongs to the given type and return a variant-free version of the object. For example, if you have a reference to an array of integers in the typed part of the code you need to protect against untyped parts of the code inserting non-integers inside the array. The only solution is to insert typechecks in the reads and/or writes to the array (there is more than one way to do it, with different tradeoffs in expressivity and performance).

A similar problem happens with functions: if you try to give a static type to a function with a dynamic implementation you can't just make the check at that point. You also need to insert type checks after every call, to see if the return value has the expected type (and again there is more than one way to do it...)
I assume that my downvoters do not believe this can be done in a conventional static type system. So I will try to explain further.
> The problem munificent was talking about is that for more complex types you cannot write that type checking predicate you want, which would check if a value belongs to the given type and return a variant-free version of the object.
The complexity of the type should not matter to a sound type system. Assume a "fromVariant" function that unpacks the variant; it has type `forall a. Variant -> Maybe a`. Maybe is either Nothing or Just a, depending on whether the variant does indeed contain what you expect it to contain.
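In fact, GHC's Data.Dynamic already ships a function of exactly this shape for first-order values: `toDyn` packs a value with a runtime type tag, and `fromDynamic :: Typeable a => Dynamic -> Maybe a` unpacks it with a check.

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

packed :: Dynamic
packed = toDyn (42 :: Int)     -- tagged with its type at runtime

asInt :: Maybe Int
asInt = fromDynamic packed     -- Just 42

asStr :: Maybe String
asStr = fromDynamic packed     -- Nothing: the tag doesn't match
```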
> if you have a reference to an array of integers in the typed part of the code you need to protect against untyped parts of the code inserting non-integers inside the array.
This "protection" is exactly what static type systems do. "Untyped code" in the context of using a static type system, is code that works with variants. So we need to pack the array of integers into a variant, where they will be suitably tagged by the runtime. The "untyped code" is free to write whatever it wants into a variant, so we must unpack it with runtime checks. The static type system forces us to unpack and assert its contents, before we can assign a specific type to it.
> The only solution is to insert typechecks in the reads and/or writes to the array
I am not sure what you mean here. When the array is packed into a variant then it can indeed be modified into anything by "untyped" code. But it should always be tagged correctly, which is the responsibility of the runtime.
It will need to be re-asserted when unpacked into a specific type.
> A similar problem happens with functions
Functions should be values like any other, so my points apply to them too.
The point I have been trying to make is that a conventional static type system will already allow for "gradual typing". I cannot see a need to use an unsound type system in order to achieve this goal.
Dart's choice of an unsound type system doesn't actually matter for those gradual typing problems we were talking about. I think you are getting too caught up in that point.
Anyway, going back to the topic, I'll try to explain why your idea doesn't work.
> assume a fromVariant function
What I was trying to say in my other post is that you can only create such a function for primitive types (numbers, booleans, strings, etc). And perhaps also for objects, if your language has something analogous to an instanceof operator.
But for the higher-order stuff like functions and mutable references you cannot write such a fromVariant function so easily! Suppose I write down a dynamically typed implementation of the identity function and pass it to some typed code that expects an int->int function. When we pass the function to the typed side we will call fromVariant on it but the best we can do at first is check if the value we have is a function (and not an integer, etc). There is no way that fromVariant can also check the function signature, to guarantee that the dynamically typed implementation will always return an integer when given an integer.
One way to solve this problem is for fromVariant to return a wrapped version of the dynamic function that checks if the returned value is of the expected type. But these wrappers can slow down the program a lot and they also mess with pointer equality...
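To make the wrapper concrete, here is a toy sketch in the Data.Dynamic vocabulary from above, modelling a dynamically typed function as `Dynamic -> Dynamic` (the names are invented for illustration). Note that the check runs on every call, not once at the cast:

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- "Cast" a dynamic function to Int -> Int by wrapping it. The signature
-- cannot be verified up front, so each call re-checks the result; a bad
-- dynamic implementation only blows up when the typed side calls it.
asIntToInt :: (Dynamic -> Dynamic) -> (Int -> Int)
asIntToInt f x =
  case fromDynamic (f (toDyn x)) of
    Just y  -> y
    Nothing -> error "blame: dynamic function broke its Int -> Int contract"
```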
> the arrays
The basic problem with arrays is similar to the one with functions. The statically typed code can't blindly trust that the array will always contain the type it expects, because at any moment the untyped portion of the code could try to insert junk values into the array. You will either need to do a runtime check when the dynamic code writes to the array (which slows down the dynamic code) or you need to add runtime checks when the static code reads from the array (which makes it slower than it would be in a fully static language).
Things also get extra tricky if the array contains non-primitive values (functions, other arrays, etc)
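A minimal sketch of the check-on-read strategy, reusing the Data.Dynamic vocabulary and modelling the shared array as a mutable cell both sides can reach (names invented for illustration):

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)
import Data.IORef (IORef, newIORef, readIORef, writeIORef)

-- The untyped side writes whatever it likes, unchecked and cheap.
dynWrite :: IORef Dynamic -> Dynamic -> IO ()
dynWrite = writeIORef

-- The typed side pays for a check on every read, because the untyped
-- side may have replaced the contents since the last look.
typedRead :: IORef Dynamic -> IO (Maybe [Int])
typedRead ref = fromDynamic <$> readIORef ref

demo :: IO (Maybe [Int])
demo = do
  ref <- newIORef (toDyn [1, 2, 3 :: Int])
  dynWrite ref (toDyn "junk")   -- the untyped side misbehaves
  typedRead ref                 -- Nothing: the read-side check catches it
```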
---
If you are interested I can send you links to a couple of gradual typing papers. As munificent said, there is currently lots of ongoing research about this and we all wish it were as simple as you were suggesting it would be :)
> Dart's choice of an unsound type system doesn't actually matter for those gradual typing problems we were talking about. I think you are getting too caught up in that point.
I was trying to show that "gradual typing" should be possible in any reasonable static type system, without resorting to unsoundness. I do concede that there is probably a definition of the term "gradual typing" out there that I am not strictly adhering to.
> Suppose I write down a dynamically typed implementation of the identity function and pass it to some typed code that expects an int->int function. When we pass the function to the typed side we will call fromVariant on it but the best we can do at first is check if the value we have is a function (and not an integer, etc).
If the best we can do is tag it as a function, then fromVariant would unpack it to a function of type Variant -> Variant, which represents all dynamically-typed functions. When we call this function, we would indeed need to assert the outputs.
> One way to solve this problem is for fromVariant to return a wrapped version of the dynamic function that checks if the returned value is of the expected type
Agreed. Although the danger of this is that you have a bomb waiting to go off inside some unsuspecting piece of typed code. The alternative is to acknowledge that it is a partial function in the type, e.g. the Variant -> Variant function becomes Int -> Maybe Int, or similar.
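Concretely, in the same Data.Dynamic sketch as before, the honest version is a one-liner; the partiality now sits in the caller's face instead of hiding in a wrapper:

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- No hidden bomb: each call may fail, and the type says so.
asChecked :: (Dynamic -> Dynamic) -> (Int -> Maybe Int)
asChecked f x = fromDynamic (f (toDyn x))
```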
The point is that we are solving these problems using conventional static typing. You haven't yet convinced me that this doesn't work.
> You will either need to do a runtime check when the dynamic code writes to the array (which slows down the dynamic code) or you need to add runtime checks when the static code reads from the array (which makes it slower than it would be in a fully static language).
It sounds like you are concerned with performance and efficiency, that may indeed be an issue, but it's orthogonal to the type system. My own position is that if one wants performance, then don't write untyped code!
> If you are interested I can send you links to a couple of gradual typing papers. As munificent said, there is currently lots of ongoing research about this and we all wish it were as simple as you were suggesting it would be
Thanks for the offer, I am definitely interested. I suspect that the outstanding issues are around usability and efficiency. I stand by my original point that "gradual typing" should be possible in any sufficiently expressive static type system.
> I do concede that there is probably a definition of the term "gradual typing" out there that I am not strictly adhering to.
The basic goal of gradual typing is that you can have your program be fully untyped or fully typed or somewhere in between, and it will behave sensibly in all cases. Adding types to a program does not change the result of the program, except that sometimes adding types may cause new type errors to show up.
> the best we can do is tag it as a function, with type Variant -> Variant
This is very limiting. With this restriction, you cannot assign an accurate static type to a function with a dynamically typed implementation, which means that you can't use dynamically typed libraries from inside statically typed code.
Similarly, you cannot use statically typed objects inside the dynamically typed code because all objects that can be touched by the dynamically typed part of the code must have all their fields typed with the Variant type.
> Although the danger of this is that you have a bomb waiting to go off inside some unsuspecting piece of typed code
Technically the bomb is inside the dynamically typed part of the code :) One of the nice things you can get with a well-thought-out gradual typing system is a guarantee that the line number in any of the type error messages will point to the dynamically typed part of the code. We call this blame tracking.
> It sounds like you are concerned with performance and efficiency, that may indeed be an issue, but it's orthogonal to the type system
Yeah, but one of the big challenges with gradual typing today is that if we try to be flexible (letting programmers add types wherever they want) and sound (with a sound type system, blame tracking, etc), all the implementations so far have had very bad performance, often running much, much slower than the untyped version of the code would (with factors like 5x or 10x slowdowns being common).
When the performance gets this bad it effectively means that your only options are to have fully typed code or fully untyped code, because almost anything in between will run super slow. Which kind of ruins the whole point of being able to gradually add types to your program.
> Thanks for the offer, I am definitely interested.
You can get very far by searching Google Scholar for "gradual typing". One good place to start out would be Siek and Taha's original paper on gradual typing for functional languages[1] and Sam Tobin-Hochstadt's PhD thesis on Typed Racket[2] (which used to be called Typed Scheme back then). For an example of the challenges of gradual typing in practice, this recent paper[3] might be interesting.
Thanks for the references. It looks like "gradual typing" is about trying to retrofit types to dynamic languages. That is certainly harder than starting with a statically-typed system and adding dynamic types (my preference). The latter clearly works, as evidenced by e.g. C# and Dart 2.
> With this restriction, you cannot assign an accurate static type to a function with a dynamically typed implementation, which means that you can't use dynamically typed libraries from inside statically typed code.
It depends what you mean by an accurate type. An accurate type, adapting your example, would expose Variant -> Variant as Int -> Maybe Int, thus capturing the partiality. I concede that Int -> Maybe Int cannot be used in a function that expects Int -> Int, but that is how it should be! An untyped language cannot provide me with a function of type Int -> Int.
Indeed, gradual typing and optional typing are more along the lines of adding types to a dynamic language. The kind of dynamic typing you were talking about looks more similar to Haskell's Data.Dynamic type or the "dynamic" type in C#. It works great for adding some dynamism to the language (for example, to interoperate with a dynamic API, or to code metaprogrammey stuff like reflection and serialization) but you can't use this to bridge the full continuum between static typing and dynamic typing, which is the design space that languages like Dart and TypeScript are trying to tackle.
> It depends what you mean by an accurate type.
The gradual typing literature tends to prefer having partial functions (that might raise well-behaved type errors at runtime) because it eases the transition between typed and untyped programs. You can add or remove type annotations without having to simultaneously add and remove tons of explicit pattern matches against Maybe. But this isn't the part I was talking about.
The key issue is that when you cast a function from (Variant->Variant) to (Int->Maybe Int) you need to add a small wrapper around the dynamic function that converts the input from Int to Variant and converts the output from Variant to Maybe Int. In theory it is simple but in practice all those wrappers can add up really quickly...
Functions are the simplest case where these higher-order type-conversion issues pop up but in my experience things are actually the most thorny when it comes to mutable objects and arrays. The extra type checks and conversions around every property read and write can add a ton of overhead. And from a semantics point of view you need to be careful because if your implementation is based around wrappers it could end up messing with object identity (like Javascript's === operator or Python's "is")
This is exactly right. It's a subtle, complex problem that isn't apparent until you start really digging into these kinds of type systems. It's really hard to define the right membrane around the untyped code and values as they flow through your program into typed regions while giving you the safety and performance you expect inside the typed code.
I respectfully disagree. The "right membrane" around any untyped value is a Variant type. A static type system would then stop you making any assumptions without unpacking and asserting its type. If we don't want variants and just want to defer type errors until runtime, then this is possible too (GHC Haskell can do this).
I did not mention GHC's Dynamic type. I mentioned GHC's deferred type errors as an alternative technique to "gradual typing". I believe there were plans to implement a polymorphic version of Dynamic, but it obviously wasn't a high priority. Note that GHC does not require any special type-system extensions to support Dynamic.
No, there's more to it. There's been some research in recent years on the actual benefits of types and the results are less straightforward than some people think.
An important aspect of type annotations, for example, seems to be that they help document the API of a module, regardless of whether they're statically checked. This is especially relevant with respect to maintenance tasks.
Conversely, the benefits of static type checking for initially getting correct code seem to be a lot murkier. There seems to be a cost associated with static types and that is that it takes longer to write code; time that can be spent in a dynamically typed language on other validation work (remember that types do not even remotely capture all software defects).
This means that it is definitely worth exploring the benefits of optional/gradual/soft typing vs. static typing and/or runtime type checking vs. compile-time type checking.
Conversely, the assumption that static types are an unalloyed good is not currently borne out by the evidence (in part, of course, that is because the evidence is inconclusive).
> There seems to be a cost associated with static types and that is that it takes longer to write code;
But this is not the only cost that matters, indeed might not even be a cost.
I've gone from being neutral about static vs. dynamic types to being pro-static types -- and the change happened when maintenance became a bigger part of my job.
Writing new code is now far less important to me than looking at a small part of a large system and trying to understand what is and is not possible at that point. Static typing does not make this easy, but dynamic typing makes it far more difficult.
> But this is not the only cost that matters, indeed might not even be a cost.
I'm not saying otherwise. My point is that there's no objective way to proclaim one better than the other. This depends on application domain, economic constraints, engineering constraints, what you're doing, and so forth.
Writing Ada software that controls a pacemaker has totally different requirements than exploratory programming in Jupyter that mostly deals with integers, floats, and arrays and matrices thereof, for example.
> I'm not saying otherwise. My point is that there's no objective way to proclaim one better than the other.
Very true. But any analysis that emphasizes writing code over maintaining it will systematically bias itself in favor of dynamic typing.
Interestingly I have had the converse debate with some of my colleagues, who have learned to hate Python because they keep having to debug existing systems. I try to tell them that it is an excellent language for the kind of one-off data-analysis that I did when I was a scientist.
They don't believe me, because here among software engineers, seemingly innocent 400 line scripts keep growing into giant, decade old, 100kloc typeless monstrosities.
> Very true. But any analysis that emphasizes writing code over maintaining it will systematically bias itself in favor of dynamic typing.
This is not what the studies do. There is no emphasis on anything. They look at how people do on a number of different tasks with different typing options and report the results.
Also, it's not just dynamic vs. static typing. Gradual and soft typing is also of interest, because it allows you to turn dynamically typed code into statically typed code without a complete rewrite.
Type annotations are brilliant for documentation, and it does matter if they're checked or not, because otherwise they're just comments, and we all know comments become wrong over time. If the compiler doesn't enforce your type annotations you can't trust them.
The sweet spot for me at the moment is a language which is fundamentally static but has good type inference so that you don't actually have to mention types all the time.
Haskell's pretty good at this, although because it's possible to write many Haskell programs without actually mentioning a single type anywhere it can be a pain to read the results. Rust deliberately constrained type inference so that you have to have type annotations on functions, which makes sure your interfaces don't get too murky when reading the code.
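For instance (a small sketch): the definition below carries no annotation at all, yet GHC infers a fully general type, which is exactly how unannotated Haskell can be precise to the compiler and still murky to a human reader:

```haskell
-- With no annotations, GHC infers:
--   slope :: Fractional a => (a, a) -> (a, a) -> a
slope (x1, y1) (x2, y2) = (y2 - y1) / (x2 - x1)
```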
I'll go with the idea in another reply that static typing really starts to show its benefit in long-lived code bases under maintenance.
Certainly a static type checker makes refactoring a lot easier, in my experience.
> Type annotations are brilliant for documentation, and it does matter if they're checked or not, because otherwise they're just comments, and we all know comments become wrong over time. If the compiler doesn't enforce your type annotations you can't trust them.
Note that I wrote statically checked. You can also check them at runtime (basically as a form of Design by Contract), which is what Dart does at the moment.
You could, but why should you spend the cycles on that at runtime when you could've done it just once in the compile phase and errored out at a useful time instead of six weeks later when you hit the one branch you neglected to write unit tests for?
Because designing a type system that is sound and expressive and simple is bloody hard. Haskell frequently has a number of `{-# LANGUAGE ... #-}` directives at the top of source files, OCaml has acquired no less than six different ways of doing runtime polymorphism: as an extreme case, the module system has morphed into a fully functional OO system in addition to the one OCaml already has.
Runtime type checks allow you to greatly simplify the difficult parts of the type system or make it more expressive.
People often forget how many expressiveness shortcuts we take so that our type systems keep working out. We have integers (which in actual practice are often just elements of a finite ring), but introduce natural numbers and suddenly a lot of static type checks stop working out. How about interactions between the natural numbers with and without zero? Integers modulo your word size vs. infinite-precision integers? Can you statically check overflow or underflow? Dart has `num`, `int`, and `double` types, and `int` and `double` are subtypes of `num`, but `int` is not a subtype of `double`; readAsBytesSync() returns a `Uint8List` that conforms to `List<int>`.

From a mathematician's perspective, computer science type systems are usually painfully simplistic. They usually capture something about the representation of the data, not its actual mathematical properties. Their role is often to prevent undefined behavior, not to capture the actual semantics. Computer science typing tends to be opportunistic rather than semantically oriented, based on what's easily possible rather than what one wants to express (see the entire family of covariance issues).
As an extreme example, consider GAP's [1] type system. We're dealing with computer algebra here, and some of the types simply cannot be captured at compile-time. Any type annotations would have to be checked at runtime, other than the trivial ones (and even integers are basically a subtype of the rationals or any polynomial ring with integer coefficients).
If I want to do effective implementations of matrices and vectors over finite fields, I pack several finite field elements into a machine word and extract them; no static type system in existence has even remotely the power to statically typecheck these operations for me, unless layered over a ton of unsafe code, and then only for some languages.
From the computer science side of things (I have a background in formal methods), type systems are underpowered specification languages that just pick a small subset of a formal spec that happens to be convenient to prove. They can express only a subset of program behavior (compare what types in even (say) Haskell do to a full-blown formal specification in Z) and often not even the interesting ones. As a result, I don't particularly stress out over whether checks only occur at compile time. Runtime type checking is simply a different tradeoff in that you move from the set of types that you can build a sound type system around to predicates that can be expressed at runtime. It's simply a different set of tradeoffs.
> There seems to be a cost associated with static types and that is that it takes longer to write code
Perhaps. As someone who has used Python and Haskell for roughly equal amounts of time in my professional career I definitely find Haskell faster to write in.
mypy is getting good (from Haskell's perspective that would be 'barely passable', still a great improvement over what it used to be). I've been using it actively for the last year and advocating it for 6 months, and I'm seeing nice ROI even considering its warts.
I don't think I've seen a Variant type in any language that doesn't store some form of type information at runtime (unless you count C void pointers, which I don't), and "any decent static type system" incurs no runtime overhead in the common case.
Of course variants would need to tag their payload, but this is what all dynamic languages need to do anyway. My point was that such dynamic behaviour can be done without resorting to an unsound "optional type system".
Isn't Dart compiled to JavaScript? As far as I can tell by playing around with DartPad, a normal int in Dart carries no runtime information indicating that it is an int (other than the fact that it is a JavaScript-level int).
If you wanted to add variants to Dart, you'd either need to add this runtime information, or limit the Dart typesystem to be no more and no less than what the JavaScript typesystem already stores dynamically.
It tags numbers, yes, but actually it doesn't tag ints in particular - there is no integer type in javascript, just a 64-bit floating point one. (This is why asm.js code sticks |0 after every arithmetic operation involving integers: that forces rounding the number as if it were an integer, and therefore allows optimizing the expression to use integer instead of floating-point arithmetic.)
If Dart wants to distinguish ints and doubles, it needs to keep track of that on its own; if it has a JavaScript variable x and needs to calculate x/2, it won't know whether to round the result as if it were integer arithmetic, or return a floating-point value. Usually it is enough to do this at compile time (the function calculating x/2 knows whether it's halving a Dart-level double or a Dart-level int), and so there's no need to make x an object that remembers its own Dart type.
Also, even if JavaScript distinguished ints and doubles, a higher-level language is likely to want to have multiple kinds of things with an int representation: bools, enums, bitfields, Unicode scalar values, etc. Again, ideally that information is tracked at compile time and the knowledge of what to do with an int representation is inserted into the compiled code, and so no metadata needs to be stored at runtime.
Writing well-typed code means writing a proof of some set of properties over the code. There are three assumptions here:
(1) creating abstractions that are typed correctly is easy
(2) the existing abstractions you're building from are correctly typed
(3) the properties that the type system proves are valuable enough to reward the effort
Whether (1) is true depends on your type system. If it's very simple (e.g. dynamically typed), it's trivially true. If it's rich and expressive, it can sometimes be impossible to express your intent without obfuscating your types (e.g. using pointers and typecasts or polymorphism or some other loophole); and with some type systems, it's plain impossible to express some valid programs.
Whether (2) is true is a social problem. You need to have enough sensible people in your community such that you get to use List<T> rather than ObjectList with typecasts, or what have you. What's available in the runtime libraries often translates into what's commonly used for APIs in third party modules, and if the library of types isn't rich enough, APIs will use loopholes or typecasts or other mechanisms that obfuscate types. Third party code turns into a kind of firewall that stops your types (and the information, proofs, they convey) from flowing from one module to another in your program.
Whether (3) is true also depends on your type system, but also on your application domain. It's in direct opposition to (1); the more you can prove with your type system, the harder it normally is to write code that proves exactly the right things, and neither more nor less (the risk is usually too little type information flowing through). If the properties being proved are important to your application domain, then that gives a big boost to type systems that can prove those properties. If it's just because it gives you warm fuzzies, and testing is what proves interesting things because interesting properties (in your domain) are harder to prove with a type system, then rich type systems are less useful.
Personally, I was once a strong proponent of rich expressive type systems. But I eventually learned that they can become a kind of intellectual trap; programmers can spend too much time writing cleverly typed confections and less time actually solving problems. In extremis, layers of abstraction are created to massage the type system, often in such an indirect way that the value of the properties being proven are lost owing to the extra obfuscation. Now, I see much more value in a strong type system for languages like Rust, but much less value at higher levels of abstraction. That's because many of the properties being proven by Rust (like memory safety) are already proven in high-level dynamic languages through GC and lack of pointers and unsafe typecasts. Whereas application-level properties being proven through type systems are seldom valuable.
Documentation and tooling are much stronger arguments for types in high-level languages than proofs; ironically, this means that the type systems don't need to be sound, as long as they're usually OK in practice!
I should send this to rsc, but it's fairly easy to find examples where the lack of generics caused an opportunity cost.
(1) I started porting our high-performance, concurrent cuckoo hashing code to Go about 4 years ago. I quit. You can probably guess why from the comments at the top of the file about boxing things with interface{}. It just got slow and gross, to the point where libcuckoo-go was slower and more bloated than the integrated map type, just because of all the boxing: https://github.com/efficient/go-cuckoo/blob/master/cuckoo.go
(my research group created libcuckoo.)
Go 1.9 offers a native concurrent map type, four years after we looked at getting libcuckoo on go -- because fundamental containers like this really benefit from being type-safe and fast.
(2) I chose to very tightly restrict the initial set of operations we initially accepted into the TensorFlow Go API because there was no non-gross way that I could see to manipulate Tensor types without adding the syntactic equivalent of the bigint library, where everything was Tensor.This(a, b), and Tensor.That(z, q). https://github.com/tensorflow/tensorflow/pull/1237
and https://github.com/tensorflow/tensorflow/pull/1771
I love go, but the lack of generics simply causes me to look elsewhere for certain large classes of development and research. We need them.
All of Go's built-in pseudo-generic types (e.g. maps) require special support from the parser. I'm not sure if they plan on doing that for sync.Map as well, but this is clearly an area that could benefit from generics.
For (2), are you looking for overloading arithmetic operators (+, -, etc.)? Do people normally consider that under the umbrella of "generics"?
For (1), for curiosity's sake, I tried benchmarking the cuckoo.go file you posted. I'm curious if these numbers are in line with what you found. The first test I did was a Rand test, Putting 10,000 random string values under random string keys, then Getting the same 10,000 keys, then Getting 5000 more random keys. The first line below uses the default code you posted, which uses
```go
type keytype string
type valuetype string
```
The second line specializes the code to just using the string type directly for keys and values. The third line uses interface{} instead for values, which is the code style HN doesn't like.
That third line shows the cost of casting to/from interface{}. The difference in the code was about as you might expect, with the "void" example needing a cast on each put and a cast on each get.
The 9.4ms result is using a version of your code hand-specialized for string key and uint32 values. The 10.4ms result uses interface{} for values.
I ran a sequential-insert test as well, with keys and values generated sequentially instead of randomly. The results are much the same, though the cost of casting to/from interface{} seems to get mostly lost in the noise.
I also tried some tests using uint32 as the key type. This int key specialized version is 3x faster than any of the string key versions, but it's not really a fair comparison... a small part of the code is specific to string keys (getinthash), so it's not as obvious what the generic equivalent of that function would be. The keys need to be Hashable or some such, not just interface{}. I'm also allocating and formatting random strings in one case, vs just picking random numbers in the other case.
I didn't find the interface{} casting in any of the above code too horribly "gross", just a bit ugly, but that's entirely subjective. Yes, all of the cuckoo versions seemed about 30% slower than the corresponding builtin map type, but my benchmark numbers don't show if that is the price of boxing, of supporting concurrency, or maybe function call overhead, or something else. I'm guessing you did more detailed benchmarks that point to the source of the slowdown.
One last comment: keytype and valuetype really need to be exported, don't they? I wasn't able to use the library without changing them to exported symbols.
> For (2), are you looking for overloading arithmetic operators (+, -, etc.)? Do people normally consider that under the umbrella of "generics"?
Overloading is usually not a "generics" concern in most mainstream languages[1]. If you do a "type class"/"Rust-style-traits" (or even multiclasses) thing then it sort of naturally takes a front seat. Incidentally, Haskell fucked this one up horribly in some respects; see the "Num" type class.
Of course, e.g. Java could theoretically add an interface called Addable to complement Comparable, which would sort-of allow a generic "+", but they've chosen not to opt for that. The fact that the "first parameter" is privileged is also a bit of a detriment to such syntax conveniences. (I can expand on this, but it's basically because for "X op Y" and "Y op X" to work, they both need to know of each other. This problem can be solved by type classes, but cannot be solved by overloading, AFAIK.)
[1] If you do have overloading from the outset (like Java, IIRC), then you're constrained because you need to be able to "pick the most specific overload" in a "generic" context.
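To spell out the type-class point: the instance is selected on both argument types, so neither operand is a privileged receiver. A hedged Haskell sketch (Meters and Feet are invented for illustration):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

newtype Meters = Meters Double deriving Show
newtype Feet   = Feet Double   deriving Show

-- Dispatch happens on the pair of argument types, so both
-- "X op Y" and "Y op X" get their own instances, symmetrically.
class Add a b where
  add :: a -> b -> Meters

instance Add Meters Feet where
  add (Meters m) (Feet f) = Meters (m + f * 0.3048)

instance Add Feet Meters where
  add (Feet f) (Meters m) = Meters (m + f * 0.3048)

-- Both `add (Feet 3) (Meters 1)` and `add (Meters 1) (Feet 3)` resolve,
-- with neither side needing to know it comes "first".
```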
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. As a result, I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver. If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.
This is a much more nuanced position than the Go team has expressed in the past, which amounted to "fuck generics," but it puts the onus on the community to come up with a set of scenarios where generics could solve significant issues. I wonder if Go's historical antipathy towards this feature has driven away most of the people who would want it, or if there is still enough latent desire for generics that serious Go users will be able to produce the necessary mountain of real-world use cases to get something going here.
Several members of the Go team have invested significant effort in studying generics and designing proposals, since before Go 1.0. For example, Ian Lance Taylor published several of his previous efforts, which had shortcomings he was dissatisfied with.
I believe your impression of the Go team's position has been corrupted (likely unintentionally) by intermediaries.
While "fuck generics" might be a low-resolution characterization, it's not entirely inaccurate, and the efforts of Taylor (and perhaps others) notwithstanding, as long as anyone is writing things like "I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve," there's reason to believe it's apt enough.
As other respondents to the GP comment show, it's not at all hard to come up with that ostensibly elusive picture if you're attending to it. So I suppose the obvious takeaway is that they're still not really attending to it. Which is fine, Go clearly has its niche and its fans, and people who don't want a pretty good Blub have other options.
This is frustrating. Substituting an insult for politely declining (while leaving the door open) is not "low-resolution" or "apt enough", it's almost entirely inaccurate. Politeness matters.
"low resolution" is a perfectly accurate description of "fuck generics", which like any two-word phrase can't entirely capture the attitude of the golang team, but nevertheless does seem to capture something of gestalt of their approach.
Finer resolution would be if fusiongyro had characterized the golang team as saying "fuck off developers who want generics," which wouldn't have been as polite as the phrase "I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve," but whose semantics are otherwise equivalent.
No. Unless someone finds a quote, the low order bit is that they didn't use the word "fuck" or any other swear word at all. You can't use the word "fuck" to accurately quote someone who didn't use that word.
He's explicitly rude. Sets up a strawman argument with the developer who can't imagine programming without generics. Mocks the people who want generics. Gasps in faux disbelief that it could be solved by types.
And then, when he goes in for the kill, to show once and for all that generics are pointless garbage:
He attacks type hierarchies and inheritance... completely missing the point by conflating parametric polymorphism and subtype polymorphism.
I think "Fuck generics" does a decent job of summarazing the rudeness, misinformation and ill-thought out arguments that the article contains about generics.
If you think you know how I ought to spend my time, come over and mow my lawn while I expound on the problems of dev entitlement culture.
I'm fine with RSC, Commander Pike, and those other benevolent dictators continuing to make much better decisions that I certainly ever would and more importantly to be super careful with what they add to the language. I write Swift, JS, objC, and Go. Go is the preferred language because it's simple and I have the entire SDK in my head.
I use Go a lot. I can't say I've ever run into not having generics being a huge deal.
Let's face it: for most programmers out there, generics are for type-safe collections. If slices and maps weren't built into Go then I'd prolly have moved to something else.
That said, I'd like to see more built-in data structures.
>You can't use the word "fuck" to accurately quote someone who didn't use that word.
Actually you very much can. It's the perfect word to describe certain attitudes, even if the person didn't use the word. It just translates to "who cares", "we don't need no stinking Generics", etc.
Bringing TOOWTDI from the realm of Python and Go to English, I see.
I suppose the argument I'm making does indeed depend on the premise that essential semantics of "fuck" can be conveyed in other ways. If you like, we could explore this by example.
My impression has always been that the Go core team is very open to the possible addition of generics, but also very wary of the very real downsides that generics have. Nothing I've ever seen from Rob or other core team members contradicts that. I'm open to citations if you can give any.
In contrast, a lot of Go users and proponents seem to be vehemently against generics, for wildly differing reasons. Some seem to be against them just for the lulz, others for crazy conspiracy theory or obscure philosophical reasons.
Me, I miss generics sometimes, but not enough to get riled up about it. I hope the Go folks come up with a clean, simple, and efficient way to get them, but until then I'm happy with the features Go has.
Yeah, god forbid someone is not absolute in their statements, but gives the benefit of the doubt. I'd say it only counts as "weasel" wording when you don't say what you mean to say or want to hide your true intention/reality. But, for all the "maybe"s etc., the parent made clear what he thinks: that the Go team doesn't care about generics. He's not going two ways about it, so this doesn't qualify as "weasel" wording.
>My impression has always been that the Go core team is very open to the possible addition of generics, but also very wary of the very real downsides that generics have.
Ten years on, and just a handful of posts and "proposals" (none bigger than 1000-2000 words) written, for something that should have been there from the start (and which other languages with even fewer resources, like Julia, Nim, Rust, Crystal, Haxe, etc., have had), does not look like "very open".
I'd get "We have a certain philosophy about Go, with which generics don't match". Or "we don't like them, deal with it". But this back and forth, and ifs and buts ad infinitum, borders on passive-aggressive treatment of the issue.
The "very real downsides" are the same old engineering tradeoffs every language had to make. "We'll only add them when we can have our cake and eat it too" is a weasel approach.
>Nothing I've ever seen from Rob or other core team members contradicts that. I'm open to citations if you can give any.
If by citations you mean "all talk but no walk" -- and even the talk being always hesitant and amounting to "we'll see", "if there is some magic way" (and the perennial "we're always open to it"), then yes, that has been a constant for a decade.
Now, I've just written ~3-5K lines of Go, and my company has just a few K or 10K lines of Go in production (mostly a C shop). But while I like a lot about it (static compilation for easy deployment, concurrency, the standard library, go fmt, etc), one of the things that stops me using it more is the generics situation.
I always assumed the Go developers were more worried about the effect of this feature on compilation time than binary size. After all, Go shipped static binaries only for years. But making the type checker more intelligent is going to imply more time spent type checking. It could also add time to parsing unless they choose the syntax carefully.
Do generics really add much compilation time? They sure do in C++ but that is because type checking C++ generics is based on substitution into the method body, which can have other generic calls inside of, which can lead to an exponential cascade and therefore exponential time compilation. Modern generics do not do that: a generic method invocation can be checked based on the type signature of the method alone.
For code generation you can do what Java does and erase generics (i.e. ignore them), in which case it will be no slower than code generation for manually erased code, which is what programmers have to write now. For good run-time performance you'd want to do specialisation, which may take longer but not much longer if the compiler is clever enough not to recompile the same method many times. C# does that and its implementation is actually fast enough for a JIT, let alone ahead of time compilation.
Even if you don't care about binary bloat (and I do care), it can introduce some of the same problems as dynamic typing.
If I am debugging some C++ and I find myself looking at a templated function, then I can't easily see what types the template parameters are. It also becomes hard to navigate to method calls, etc.
C++ is a particularly bad example, because template metaprogramming is essentially typeless. But any form of polymorphism introduces these problems to some extent.
To be fair, that includes the run-time polymorphism already available in Go. Thus if generics allow us to replace some `interface{}` hacks with properly typed middle-men (even if they compile down to `interface{}`) then that would be genuine benefit.
Since C++ monomorphises every single template instantiation in a program, your debugger certainly should be able to tell you what the type parameters for that instance were!
A sufficiently smart linker will make that harder by removing functions that, byte for byte, are identical to one they already included in the executable ("identical code folding"; both gcc's gold linker and visual studio support that).
In my line of work I am very rarely actually looking at a running executable with an interactive debugger.
More often I am navigating through source code comparing it to logs, stack traces and other evidence that I can grab of what went wrong in production.
The source-level is important, because although I said "debugging" in my comment, what that very often comes to is first figuring out the intent of some other engineers from the thing they wrote -- and what they wrote is the source code, not the program state.
Did he compare those shortcomings with the shortcomings of not implementing generics at all? Because that is the tradeoff here. After decades of research, a decade of go, millions of lines of public and private code, a decade of feedback, and dozens of languages with generics, it's obvious that the perfect generic system isn't going to just happen, so it's not perfect generics vs imperfect generics, it's no generics vs imperfect generics.
Except they seem to focus only on Java, .NET and C++, forgetting that generics were initially implemented in CLU back in 1975, and there were several programming languages since those days that had some form of generics being designed into them.
FWIW I have Liskov and Guttag (1986) on my desk. I haven't forgotten. I also explicitly address the "several since those days" in https://research.swtch.com/go2017#generics.
I liked using it at university back in the day, and didn't have to repeat code.
Eiffel is a very powerful language with good tooling; it just failed at adoption, because licenses were too expensive and the software industry still has issues valuing quality.
Interesting to hear your experience. I had read a good amount about Eiffel and also large parts of Bertrand Meyer's book Object-Oriented Software Construction (quite a thick book, which uses the syntax and semantics of Eiffel, or something close to it, IIRC [1]) some years ago. Had found the language very interesting and was impressed by it (and by the book). Read some case studies / success stories about it, which were good too. He and his team seem to have taken a lot of pains and thought a lot about the design as well as the implementation of the language, from both an academic and an industrial-use perspective - so it seemed to me.
There is also an interesting anecdote about use of Eiffel in the book, here:
With Eiffel tools you got the IDE with interactive development, including a VM for rapid prototyping. Then you would use the AOT compiler (via compilation to C) to produce shippable binaries.
So combining the easiness of interactive development with performance when it was time to ship the product.
It was also available before Java was a thing.
This is what I always kind of missed with Java and .NET, the previous generation of languages (Eiffel, Modula-3, Oberon(-2)) all had a mix of interactive development and AOT compilation for shipping release builds.
You seem to imply that Eiffel's solution was superior to Java's but nothing could be further from the truth.
The Eiffel compiler required four distinct sequential processes (note: not phases. I really mean that four programs needed to be run in sequence, and each of these four programs implemented who knows how many passes). Eiffel generated C++, huge executables, was incredibly slow (even with contract stuff turned off).
It was very hard to debug, with gdb routinely giving up on the complex .o generated. It was a very verbose language that implemented all known features under the sun, a bit like Ada and Scala.
Eiffel didn't take off for some very solid reasons.
> The Eiffel compiler required four distinct sequential processes (note: not phases. I really mean that four programs needed to be run in sequence, and each of these four programs implemented who knows how many passes). Eiffel generated C++, huge executables, was incredibly slow (even with contract stuff turned off).
This was not an inherent problem of the language, but one of the implementation.
I know this because I wrote a compiler for an Eiffel dialect for my Ph.D. thesis and there have been other Eiffel compilers without these shortcomings.
Also, EiffelStudio – which seems to be what you are talking about – generated C code, not C++ and had the option to generate an intermediate representation instead that could be mixed with compiled code.
> It was a very verbose language that implemented all known features under the sun, a bit like Ada and Scala.
It was actually pretty minimalist, not at all like Ada or Scala.
Code was verbose because (1) programmers were encouraged to annotate methods with contracts and (2) exactly because it was minimalist and didn't have a lot of alternative ways of expressing the same functionality.
Edit: There are two hard parts about writing an Eiffel compiler.
1. Getting the semantics of feature renaming during inheritance right. If you aren't careful, it's easy to introduce errors during this step.
2. Doing separate compilation for mutually dependent classes. This is where much of the complexity of the EiffelStudio compiler comes from. Java solved the problem by simply not allowing separate compilation in those cases. If class A has a member of type B and B has a member of type A, they need to be compiled together.
Everything else that's involved in writing an Eiffel compiler does not require more than what you learn in your typical compiler course. (Though, obviously, experience will get you better results.)
>Also, EiffelStudio – which seems to be what you are talking about – generated C code, not C++ and had the option to generate an intermediate representation instead that could be mixed with compiled code.
I think that must be what I was referring to by my mention about melting or freezing in a nearby comment. Didn't remember the technical details. Interesting feature. Wonder if other languages have something like that.
That's EiffelStudio's Melting Ice stuff [1]. One of Bertrand Meyer's trademarks is his penchant for coming up with funny names for such features.
(Edit: Reading over it again, I realize that this may be read as criticism. To clarify, while I disagree with Meyer on some things, this isn't about disagreement. He is a teacher at heart – in fact, a very good teacher – and you can see some of his habits for keeping students engaged sometimes carry over into his writing.)
You don't see this approach much anymore, because these days it's so cheap to just generate unoptimized native code if compilation speed is an issue.
Obviously, if you have a JIT-compiler, then something similar happens internally in that some methods will be compiled, while others will be left in IR form (and sometimes, methods will even revert to using their IR). This was technology from back when CPU speed was still expressed in MHz.
OCaml had and still has both a bytecode and a native code format (for similar reasons), but they cannot be mixed. OCaml's bytecode still sees use for (1) the bootstrap process, which allows you to build OCaml from scratch, (2) running OCaml code on oddball architectures that the native compiler doesn't support, (3) running code in the time-traveling debugger, and (4) when you need compact rather than fast executables.
OCaml's bytecode interpreter makes use of the fact that OCaml is statically typed and therefore doesn't incur nearly as much of a performance penalty as bytecode interpreters for dynamically typed languages.
>Obviously, if you have a JIT-compiler, then something similar happens internally in that some methods will be compiled, while others will be left in IR form (and sometimes, methods will even revert to using their IR).
I was going to say (in my parent comment to which you replied) that maybe Eiffel's Melting Ice tech is something like Java JIT tech, but wasn't sure if they were the same thing.
Why on earth would one need to use gdb, given the nice graphical debugger of EiffelStudio?
Also, it was way faster than code produced by Java, which, I'll remind you, only got a JIT with version 1.3, and to this day only third-party commercial JDKs offer support for AOT compilation to native code.
Java is also quite verbose. Verbose languages are quite good for large scale development.
In real life the majority of code is read, not written, so I'd rather have it verbose and comprehensible than have to make sense of a line full of hieroglyphs.
(Haven't worked much on Java recently, but did use it a fair amount some years ago.)
Isn't Java roughly as verbose as Eiffel?
>that implemented all known features under the sun
At least based on looking at the Wikipedia article for Eiffel, it does not seem to have implemented functional programming, unless the article is out-of-date, or unless the agent mechanism supports FP somehow (not sure about that last bit).
> The Eiffel compiler required four distinct sequential processes (note: not phases. I really mean that four programs needed to be run in sequence, and each of these four programs implemented who knows how many passes). Eiffel generated C++, huge executables, was incredibly slow (even with contract stuff turned off).
That sounds unrelated to the semantics of the language.
>With Eiffel tools you got the IDE with interactive development, including a VM for rapid prototyping. Then you would use the AOT compiler (via compilation to C) to produce shippable binaries.
I seem to remember that there was some intermediate form of language translation too - something called melting or freezing or some such (might have been two different things). This was for EiffelStudio, probably, not generic Eiffel stuff.
>This is what I always kind of missed with Java and .NET, the previous generation of languages (Eiffel, Modula-3, Oberon(-2)) all had a mix of interactive development and AOT compilation for shipping release builds.
Interesting. Didn't know that that generation had both. As an aside, I had read a good description about Modula-2 (not -3) in a BYTE article (IIRC, years ago), and it seemed to me at the time that it was a good language. Having had a good amount of Pascal background before doing a lot of C, I was interested to try out Modula-2, but probably didn't have access to any compiler for it at the time. Later I read that Delphi's feature of units for modularity may have been inspired by Modula-2's modules, along with the concept of separate compilation.
The OOP features in Turbo Pascal are based on Object Pascal, which was designed at Apple as the systems programming language for their Lisa and Mac OS, with input from Niklaus Wirth.
Turbo Pascal was the reason why most PC developers never had too much love for Modula-2, because by Turbo Pascal 4.0 we had all the goodies from Modula-2 with case insensitive keywords.
Also, the compilers were more expensive than Turbo Pascal ones.
Incidentally Martin Odersky was the author of Turbo Modula-2, the short lived Modula-2 compiler sold by Borland.
I usually thank my technical school for having us go through Turbo Basic and Turbo Pascal before getting into Turbo C.
We were using Turbo Pascal 5.5 (I already knew 3.0 and 4.0) and 6.0 was still hot out of Borland's factory, which is why I've always been a fan of strongly typed systems programming.
The key difference between Eiffel and Java generics is that Eiffel allows for covariance by design. Obviously, this is unsafe and runtime type checks had to catch these cases. The goal was originally to resolve this through additional static constraints [1], but in the end the downsides always outweighed the benefits.
In the end, Eiffel stuck with covariance and a type system that is technically unsound, because the simplicity of the type system outweighed any hypothetical gains from making it fully sound.
Practical problems with Eiffel's generics arise mostly from the fact that, like Java, constrained genericity inherently relies on nominal subtyping, which makes it less expressive than it could be.
Would you mind going into a bit more depth about Eiffel's generics? It's not obvious to me from Wikipedia how they differ from normal parametric polymorphism.
This made me wonder: have there been efforts to fork golang and add generics as a proof of concept? In theory it should be much easier than a few of the crazy things people have done with C over the years.
It would be a wasted effort given the community's high resistance to them; better to spend those hours contributing to a project where users value the work given for free.
I'm not even a Go developer, I just played with it a bit a couple of years ago and used it for a small one-off internal API thing, and I can think of a dozen real-world use cases for generics off the top of my head.
* type-safe containers (linked lists, trees, etc.; see the sketch below)
* higher order functions (map, reduce, filter, etc.)
* database adapters (i.e. something like `sql.NullColumn<T>` instead of half a dozen variations on `sql.NullWhatever`)
* 99% of functions that accept or return `interface{}`
I appreciate that they seem open to the idea for v2, but "the use case isn't clear" is absurdly out of touch with the community. Using `interface{}` to temporarily escape the type system and copy/pasting functions just to adjust the signature a bit were among the first Go idioms I learned, and both exist solely to work around the lack of generics.
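To illustrate the container case from the first item in the list above, here is a minimal sketch of the workaround Go forces today. Stack is a made-up example type; a generic version would catch the mistake at compile time:

    package main

    import "fmt"

    // Stack is an illustrative container. Without generics it must erase
    // its element type to interface{}; misuse is only caught at runtime.
    type Stack struct {
        items []interface{}
    }

    func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

    func (s *Stack) Pop() interface{} {
        n := len(s.items)
        v := s.items[n-1]
        s.items = s.items[:n-1]
        return v
    }

    func main() {
        var s Stack
        s.Push("oops") // a mistake the compiler cannot catch
        s.Push(42)
        fmt.Println(s.Pop().(int)) // fine: prints 42
        fmt.Println(s.Pop().(int)) // panics at runtime: interface conversion
    }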
It's generally the opinion of the Go community that map, reduce and filter are bad ideas due to how easily they are abused. A for loop gets the job done easily enough. If you've ever worked with data scientists working with Python, you'll quite often see them all chained together, probably with some other list comprehensions thrown in until it becomes one incomprehensible line.
If that's the case, then the Go community is wrong.
For loops do not get the job done easily enough. I've lost count of the number of times I've had to do contortions in order to count backwards inclusive down to zero with an unsigned int. With a proper iterator API, it's trivial.
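(For concreteness, a sketch of that contortion in Go; the slice xs is just a placeholder:)

    package main

    import "fmt"

    func main() {
        xs := []int{1, 2, 3}

        // The naive backwards loop with an unsigned index never terminates:
        // when i reaches 0, i-- wraps around to the maximum uint value,
        // so the condition i >= 0 is always true.
        //
        //     for i := uint(len(xs)) - 1; i >= 0; i-- { ... }
        //
        // The usual contortion is to shift the test by one:
        for i := uint(len(xs)); i > 0; i-- {
            fmt.Println(xs[i-1])
        }
    }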
Furthermore, for loops are a pain to optimize. They encourage use of indices everywhere, which results in heroic efforts needed to eliminate bounds checks, effort that is largely unnecessary with higher level iterators. Detecting the loop trip count is a pain, because the loop test is reevaluated over and over, and the syntax encourages complicated loop tests (for example, fetching the length of a vector over and over instead of caching it). For loop trip count detection is one of the major reasons why signed overflow is undefined in C, and it's a completely self-inflicted wound.
I'm generally of the opinion nowadays that adding a C-style for loop to a language is a design mistake.
It's also strange that a language that wants to encourage parallelism requires for loops, and specifies that they always run sequentially. Java 8 can put map/reduce with a thread pool in the standard library precisely because it doesn't use a for loop; imagine how tedious and repetitive the Go version would be: https://docs.oracle.com/javase/tutorial/collections/streams/...
> For loops do not get the job done easily enough.
There was a nice paper at this year's POPL which (in my opinion) allows you to substantiate this claim.
The paper is "Stream Fusion to Completeness", by Oleg Kiselyov, Aggelos Biboudis, Nick Palladinos, and Yannis Smaragdakis.
The actual question in the paper is how to compile away stream operations to imperative programs using for/while loops and temporary variables.
On the other hand, if we look at the output of the compiler we can see how complex it is to express natural stream programs using only for/while loops.
For instance, even very simple combinations of map/fold/filter create while loops and make it difficult to detect the loop trip count afterwards (in fact, you need an analysis just to detect that the loop terminates).
If you combine several such streams with zip, the resulting code makes a nice entry for obfuscated code contests.
Finally, if you use flatMap the resulting control flow doesn't fit into a for/while loop at all...
So for/while loops are not simpler than stream processing primitives and unless you explicitly introduce temporaries (i.e., inline the definition of map/fold/etc.) you quickly end up with very complicated control flow.
This all depends on what "the job" being done well enough is. You can't use a research paper to refute the experience of the many programmers who successfully use for loops to get their work done. That's a statement about usability, not expressiveness.
If you want to show something else is easier to use, you'd have to do a user study, and even that's not going to be universally applicable, since it depends on the previous experiences of the user population being tested. It's why these things tend to be debated as a matter of taste.
Yes, people can get crazy with inline anonymous function chaining/composition, and that can quickly get out of hand in terms of maintainability and readability. But deeply nested imperative loops are often much, much worse to debug and understand, because the intermediate steps are not nearly as explicit as in a functional chain that simply takes data and returns data at every step.
Regardless, these are simply cases of people writing bad code, and nobody is claiming map/reduce/filter is a panacea for bad code.
Functional composition/chaining works best with small, well-named single purpose functions that compose/chain together into more complex functionality (with appropriate names at every non-trivial level of chaining/composition). You can't easily compose/chain imperative loops this way (at least not without wrapping them in functions that take data, and returns transformed data, by which point you might as well use map/reduce/filter to transform the data to begin with to get rid of the impedance mismatch).
> If you've ever worked with data scientists working with Python, you'll quite often see them all chained together, probably with some other list comprehensions thrown in until it becomes one incomprehensible line.
It's not incomprehensible, it's just phrased a different way from what you're used to.
A lot of programming boils down to mutating local or global state, by executing lines of code -- each of which mutating the contents of some variable somewhere -- in order, one at a time. But this is just one style of programming, and is not the only one. When using map, filter, reduce etc. we, instead, construct a path through which data flows, where each line modifies some aspect of the data, as it flows through the path. This is essentially what Functional Reactive Programming is, although you can get the same effect using a pure language and function composition.
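To make the contrast concrete in Go terms, here is a minimal sketch. The helpers are int-specialized, since without generics they cannot be written once for all types:

    package main

    import "fmt"

    // Int-specialized pipeline helpers.
    func filter(xs []int, keep func(int) bool) []int {
        var out []int
        for _, x := range xs {
            if keep(x) {
                out = append(out, x)
            }
        }
        return out
    }

    func mapInts(xs []int, f func(int) int) []int {
        out := make([]int, len(xs))
        for i, x := range xs {
            out[i] = f(x)
        }
        return out
    }

    func reduce(xs []int, acc int, f func(int, int) int) int {
        for _, x := range xs {
            acc = f(acc, x)
        }
        return acc
    }

    func main() {
        xs := []int{1, 2, 3, 4, 5}

        // Mutating style: state changes line by line.
        total := 0
        for _, x := range xs {
            if x%2 == 0 {
                total += x * x
            }
        }

        // Data-flow style: each step takes data and returns new data,
        // so intermediate results can be named and inspected.
        total2 := reduce(
            mapInts(
                filter(xs, func(x int) bool { return x%2 == 0 }),
                func(x int) int { return x * x }),
            0,
            func(acc, x int) int { return acc + x })

        fmt.Println(total, total2) // 20 20
    }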
> It's generally the opinion of the Go community that map, reduce and filter are bad ideas due to how easily they are abused.
There is a germ of truth here - sometimes list comprehensions make code harder to understand than if you just wrote an if statement or a loop - but you've overgeneralized. I don't think anyone on the core Go team seriously thinks that map and reduce are bad ideas when Google relies so heavily on MapReduce (and its successors).
"Ability to abuse" isn't a good criteria for language design. I've seen plenty of abuse of Go channels, but I'm not going to make the argument you should remove them.
Python's comprehensions are among the most powerful, easy to use, concise capabilities of any language I've used. You can address the abuse issue with coding guidelines: no more than 2 deep. Done.
>It's generally the opinion of the Go community that map, reduce and filter are bad ideas due to how easily they are abused.
How easily are they abused?
Because it's the opinion of the programming community, including the brightest programmers out there, that FP, and map, reduce and filter are totally fine and dandy.
Yeah, using a tuple for what clearly should be a disjoint union is infuriating. I really don't buy the gopher argument that 'it is hard to understand'.
> tbh if you program Go correctly, you use interface{} pretty rarely.
It's a bit like saying "if you program C correctly, you use (void *) pretty rarely." That's not the case: the Go maintainers themselves keep adding APIs with interface{} everywhere. Are you saying they are not programming Go correctly?
The request for use cases in Go seems a bit like begging the question to me. Since Go doesn't have generics, anything designed in Go will necessarily take this into account and design around this lack. So it's relatively easy to show that Go doesn't have a compelling use case for generics, since the designs implemented in Go wouldn't (usually) benefit from generics!
Rust has generics and traits/typeclasses, and the result is my Rust code uses those features extensively and the presence of those features greatly influences the designs. Similarly, when I write Java, I design with inheritance in mind. I would have trouble showing real world use cases for inheritance in my Rust code, because Rust doesn't have inheritance and so the designs don't use inheritance-based patterns.
Essentially, how does one provide real world evidence for the utility of something that's only hypothetical? You can't write working programs in hypothetical Go 2-with-generics, so examples are always going to be hypothetical or drawn from other languages where those tools do exist.
And if you look at C you'll see that interfaces and struct methods are also unnecessary, GC is also unnecessary, bound-checked arrays are also unnecessary.
The question is: do you want to write type-safe code? Code that is memory safe, with bounds-checked arrays? Or not? Assembly makes all that stuff unnecessary as well. This is not a good argument, especially when the Go standard library keeps gaining type-unsafe APIs using interface{} everywhere. That is precisely what generics are for: to allow writing parametric functions (sort) or type-safe containers, instead of things like sync.Map in the std lib.
If you care about type safety and code reuse then generic programming is a necessity. What do you think append, copy or delete are? They are generic functions. All people are asking for is the ability to define their own in a type-safe fashion.
Are these use cases that Russ Cox doesn't know exist?
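A minimal sketch of that asymmetry; first is a hypothetical function, shown only to illustrate what cannot be declared:

    package main

    import "fmt"

    func main() {
        // The builtins are generic in all but name: the same append works
        // on a []T for any element type T.
        ints := append([]int{1, 2}, 3)
        strs := append([]string{"a"}, "b")
        fmt.Println(ints, strs)

        // User code cannot declare such a signature. There is no way to
        // write a single type-safe helper for all slice types, e.g.
        //
        //     func first(xs []T) T // no such T is available to us
        //
        // so each use gets either a per-type copy or interface{}.
    }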
What's unsafe about using one of the C libraries that do bounds checking, etc?
The argument about assembly is a red herring. Let's compare:
1) In addition to writing your business functions, learn these additional control structures to obtain safety
2) When writing your business functions, also use these well reviewed functions that enforce type and memory safety when moving code across interfaces.
3) Port all of your code to assembly, write your own memory safety control structures from scratch.
You see the difference? The choice in your mind is between "add control structures to the language" and "do everything painstakingly by hand", but we are advocating a third option: use well-reviewed libraries written with only the basic control structures.
The reason I am advocating that is the more control structures you have the harder it is to analyze code. You end up slowly moving your codebase to a point where only people with deep deep knowledge of an advanced programming language can read it.
I think culturally, in 2017, programmers underestimate how much can be done with just functions and literals and well-written helpers.
The reason for that is we are rewarded (psychologically and professionally) for learning new control structures, but not as often for writing better code using the beginner structures.
It only suggests you can't easily give an example because the language is forcing a design where such things aren't needed. Sort of like linguistic relativity.
Which still proves the parent's point: it's not necessary. The question is what absolutely can't be done without them (probably nothing) so a better question is how much design/engineering/test time could be saved with them?
On the latter part I'm fairly cynical these days, since I'm presently on a team where the lead embraced a Scala-DSL-heavy, functional design, and the net result has been a total loss of project velocity. When we need to prototype a new requirement, the pushback is, paraphrasing, "oh, could you not do that in <core product> and handle it somewhere else?", so we end up with a bunch of shell scripts cobbled together to do it.
If I were in the "go community" I would be pretty annoyed by this quote. I would find it dismissive of the literal years of people pointing at areas of their code that are bloated and less safe due to not having generics.
It doesn't seem to me that there's a shortage of people pointing out real-world use cases at all, and I'm looking from the outside in.
As a maintainer, you're bombarded with reasonable requests (I think around 10 a day on the Go project). Part of your job is to turn down hundreds of these and pick the few that benefit the most people from a diverse set of users, and that don't break anything or extend the API surface too much. Then, whatever you choose, people complain vociferously. Sometimes good requests get ignored in that noise.
Choosing is hard, and while they could improve, I think the Go maintainers have done a pretty good job, and are willing to admit mistakes.
Admitting a mistake on "generics" would probably go a long way towards credibility for the team.
No matter how many excellent decisions they've made that have delicately balanced opinions and tradeoffs, this one hasn't gone over well, and it's very widely known.
I haven't programmed in Go, but from what I understand, Go's explicit error handling isn't enforced by the type system: you can leave off an `if err != nil { ... }` check and everything still compiles. I think adding a generic result/error type (like the Result type in Rust or the Either type in Haskell) would be a pretty useful feature, as currently the error handling sits in kind of a weird place between forced handling and traditional exceptions.
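That gap is easy to see in a minimal sketch (config.txt is a placeholder path):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // An unused variable is a compile error, so this would be rejected:
        //
        //     f, err := os.Open("config.txt") // "err declared and not used"
        //
        // But nothing forces the error to be checked; both of these compile:
        f, _ := os.Open("config.txt") // error explicitly discarded
        os.Open("config.txt")         // both return values silently dropped
        fmt.Println(f)
    }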
> The Result type forces you to at least actively dispose of the error.
Sort of: by default, an unused value of the `Result` type produces a compiler warning. Like all warnings in Rust, this category of warnings can be promoted to a hard error for all users with a single line of code. And unlike other languages where compiler warnings are uniformly ignored, there is a strong culture in Rust of making sure that code compiles warning-free (largely due to the fact that the compiler is rather selective about what to issue warnings about, to help avoid useless false positives and warning spew (there is a separate project, called "clippy", for issuing more extensive warnings)).
I think you're thinking of Result<(), E>, whereas they're thinking of Result<T, E>, that is, you only get that warning if you don't care about the Ok value of the Result. If you need the value, you also need to deal with the error in some fashion.
This seems like a no-op to me. My stack of code tools always warns me of unhandled errors (insert argument about maybe the compiler should be doing it I guess?) but I've never understood how a Result type provides any real benefit over the usual error-tuple Go uses.
In both cases I have to either write a handler immediately after the call, or I'm going to fail and toss it back up to whoever called me.
Errors-should-be-values has always seemed like a bizarre statement to me. Like, fine, but they're still errors - their core feature will be that I stop what I want to do and handle them. And in that regard I much prefer exceptions (at least Python style exceptions) because most of the time I just want to throw the relevant context information up to some level of code which knows enough to make a decision on handling them. The thing I really want is an easy way to know all the ways something can fail, and what information I'm likely to get from that, so I can figure out how I want to handle them.
> In both cases I have to either write a handler immediately after the call, or I'm going to fail and toss it back up to whoever called me.
The main difference as I see it is that in Rust, you can't get the success value if there's an error, so you're forced by the compiler to handle any error if there is one. With tuples, you're free to just look at one element regardless of the value of the other.
In order to maintain compatibility with other interfaces, they have to return error in several places. Take bytes.Buffer, which returns errors even when they are impossible. Same with http.Cookie(): it promises the error will only ever be http.ErrNoCookie, but that isn't going to make static analysis tools happy.
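For instance (a minimal sketch; per the bytes.Buffer documentation, the returned error is always nil and the actual failure mode is a panic):

    package main

    import (
        "bytes"
        "fmt"
    )

    func main() {
        var buf bytes.Buffer
        // WriteString returns an error only so the type satisfies
        // writer-style interfaces; it is documented to always be nil.
        n, err := buf.WriteString("hello")
        fmt.Println(n, err) // 5 <nil>
    }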
Go's errors are really nice (if a bit repetitive) in my opinion. Type signatures enforce that you accept and acknowledge an error returned from an invocation, but leave it up to you to handle, pass along, or silently swallow.
Yes, it's somewhat similar in that the compiler checks if you have handled or chosen not to handle an error. You're not allowed to ignore it (through omission).
Declaring that every method throws Exception doesn't break the world any more than every Go function returning a value of type error. You're intentionally saying pretty much anything could go wrong, and nobody can plan ways to recover.
If they want to "learn" about generics perhaps they can read the literature of the past 30yrs and look at how other languages have adopted those learnings: Java, C#, Haskell, OCaml and Coq.
Look, even allowing type aliases to become part of an interface declaration would be a HUGE win. You can't currently write portable arbitrary data structures without reimplementing them with a code generator. Ugh!
> If they want to "learn" about generics perhaps they can read the literature of the past 30yrs and look at how other languages have adopted those learnings: Java, C#, Haskell, OCaml and Coq.
Yeah, I find it strange how languages are trending at a glacially slow pace to having the same features that strongly typed functional programming languages have had for literally decades. It's like we're going to be using actual functional programming languages eventually but the only way it'll happen is to very slowly change what everyone is currently using in small steps.
Static types, immutability, type inference and non-null variables are popular design choices right now, but they've been in functional programming languages for nearly 50 years. I'm still waiting for inductive types and pattern matching to turn up in a mainstream language, only to see them talked about like they're new concepts.
That's not exactly surprising: the C# community has been doing that since the beginning of the language. Anything not in the language is pointless academic wankery, and as soon as Microsoft announces it, it's the best innovation in computing history since Microsoft was created.
Source: got to interact with the community between the C# 1.0 and 4.0 releases (2.0 added generics, 3.0 added lambdas, neither feature was considered of any use to a Productive Developer up to the day when Microsoft officially announced them).
> That's not exactly surprising: the C# community has been doing that since the beginning of the language. Anything not in the language is pointless academic wankery, and as soon as Microsoft announces it, it's the best innovation in computing history since Microsoft was created.
That isn't true inside Microsoft. Many of the people who work on C# are the same academic wanks that work on Scala or F#. C# has a different user base from those languages though, so they still have to be careful what they add to the language, and many language features are planned 3 or 4 versions in advance.
No, that was not intended to be included in "the C# community". Hell, SPJ used to work at Microsoft (he may still, but he used to).
> the people who work on C# are the same academic wanks that work on Scala or F#
I'm sure you mean wonks, but I liked the typo.
> C# has a different user base from those languages though, so they still have to be careful what they add to the language, and many language features are planned 3 or 4 versions in advance.
I have no issue with the evolution of C# rate or otherwise, only with a number of its users.
C# has a specific audience, and the language designers cater to them pretty well. I really really like C# as a language, and I don't mind delayed access to certain features that I already like from other languages.
You probably have a beef with some C# users not because of their choice of language, but because the field they work in (primarily enterprise) tends to breed a certain kind of attitude that other techies don't like very much.
It is new to C#, which is slowly catching up to Scala and F# in that regard. Mads Torgersen is good friends with Martin Odersky; in fact, when I first met Mads back in 2006 or so, they were talking about adding pattern matching to C#. C# is a much more conservative language, and it makes sense that it would take a while to add.
There are good reasons to use C#, so when it gets a new feature that other languages have had for years, well, it is newsworthy.
There is a paper, Pizza into Java: Translating theory into practice, from Odersky and Wadler at POPL 97 about how to mix generics, lambda and pattern matching in a Java like language.
I think the reason for that enthusiasm is not so much that it's the new hotness (although it is some people's first encounter with the idea), but that it's now available in a mainstream language that their employer will actually let them use (in about five years when they finally bother upgrading Visual Studio).
> strange how languages are trending at a glacially slow pace
Human behaviour is strange when you expect rationality. Sour grapes is such a pervasive cognitive bias that one has to wonder why it exists since it's obviously irrational. I think it's likely that it presents a major advantage in the psychology of group cohesion.
> perhaps they can read the literature of the past 30yrs
40, nearing on 45:
> Milner, R., Morris, L., Newey, M. "A Logic for Computable Functions with reflexive and polymorphic types", Proc. Conference on Proving and Improving Programs, Arc-et-Senans (1975)
The basic tradeoff is monomorphization (code duplication) vs. boxing (in Go speak, interface{}). The problem with saying "well, this is a tradeoff and both sides have downsides, so we won't do anything" is that the tradeoff still exists—it's just something that manually has to be done by the programmer instead of something that the compiler can do. In Go, the monomorphization approach is done with code duplication (either manually or with go generate), while the boxing approach is done with interface{}. Adding generics to the language just automates one or both of the approaches.
The idea that the monomorphization/boxing tradeoff can be dodged by not having generics in the language is a fallacy. In Go, today, you as a programmer already have to decide which side of the tradeoff you want every time you write something that could be generic. It's just that the compiler doesn't help you at all.
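To spell the tradeoff out, here is a minimal sketch of both sides as a Go programmer has to write them today (sum is a placeholder algorithm):

    package main

    import "fmt"

    // Monomorphization by hand: one copy of the algorithm per type,
    // duplicated manually or with go generate.
    func sumInts(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func sumFloats(xs []float64) float64 {
        total := 0.0
        for _, x := range xs {
            total += x
        }
        return total
    }

    // Boxing by hand: one copy of the algorithm, but every element is
    // boxed in an interface{} and type-asserted back out at runtime.
    func sum(xs []interface{}) int {
        total := 0
        for _, x := range xs {
            total += x.(int) // panics if an element isn't an int
        }
        return total
    }

    func main() {
        fmt.Println(sumInts([]int{1, 2, 3}))
        fmt.Println(sumFloats([]float64{1.5, 2.5}))
        fmt.Println(sum([]interface{}{1, 2, 3}))
    }

With language-level generics, the compiler would generate one (or both) of these from a single definition instead of the programmer choosing per call site.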
> The basic tradeoff is monomorphization (code duplication) vs. boxing (in Go speak, interface{}).
And even that is not the whole story: just because you reify generics does not mean you have to monomorphise everything. Microsoft's .NET runtime, for instance, only monomorphises instantiations over value types.
Does that mean writing a separate version of an algorithm for each type or types that you want it to handle? Like a sort for ints, a sort for floats, etc., even if the logic is otherwise the same (or almost)?
Not a PL design or theory expert, just interested.
In Haskell (and I believe Scala), one can use pragma hints to specify to the compiler when to specialise, if performance becomes a problem. So I don't see this as an argument against parametric polymorphism.
> The basic tradeoff is monomorphization (code duplication) vs. boxing (in Go speak, interface{}).
There is also the approach taken by Swift, where values of generic type are unboxed in memory, and reified type metadata is passed out of band. There's no mandatory monomorphization.
Really by "boxing" I mean "dictionary passing" (I've also seen the term "intensional type analysis" for it). That encompasses approaches like Java as well as those of Swift.
Hybrid approaches in which the compiler (or JIT!) decides whether to monomorphize or use vtables are also possible.
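Dictionary passing can be sketched by hand in Go; ordDict and maxOf below are illustrative names, and real implementations pass compiler-generated tables rather than structs of closures:

    package main

    import "fmt"

    // The "dictionary": the operations the algorithm needs for some
    // element type, passed as an explicit runtime argument instead of
    // being compiled away by monomorphization.
    type ordDict struct {
        less func(a, b interface{}) bool
    }

    // maxOf is compiled once; per-type behavior comes from the dictionary,
    // much like a typeclass dictionary or a witness table.
    func maxOf(d ordDict, xs []interface{}) interface{} {
        m := xs[0]
        for _, x := range xs[1:] {
            if d.less(m, x) {
                m = x
            }
        }
        return m
    }

    func main() {
        intOrd := ordDict{less: func(a, b interface{}) bool { return a.(int) < b.(int) }}
        fmt.Println(maxOf(intOrd, []interface{}{3, 1, 4})) // 4
    }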
"Dictionary passing" is a good term for this implementation strategy, I haven't heard it named before. Do you know of any other languages that take a similar approach?
Sure, but it still does basically the same thing of passing around witness tables. Values happen to have a uniform representation, but this seems like variation within a cluster of languages that do similar things rather than something fundamentally new.
The Swift approach shares many of the same characteristics and trade-offs as "true" (aka Java-style) boxing. Of course, it does avoid some downsides of that, but also brings along some additional ones.
I think the main difference is that the Swift approach ends up being more memory-efficient; e.g., Swift's Array<Int> is stored as an array of integer values, compared to Java's ArrayList<Integer>, where each element is boxed.
Also the Swift optimizer can clone and specialize generic functions within a module, eliminating the overhead of indirection through reified metadata.
Yes, there are some pitfalls of Java that Swift avoids, like compulsory allocation, but it fundamentally has the same style of indirection (often pointers to stack memory rather than heap, but still pointers), type erasure and pervasive virtual calls, and brings with it even more virtual calls due to needing to go through the value witness table to touch any value (which might be optimised out... or might not, especially with widespread resilience).
The compiler specializing/monomorphizing as an optimisation shows that Swift has a hybrid approach that tries to balance the trade-offs of both (like many other languages, like Haskell, and, with explicit programmer direction, Rust), but the two schemes still fall into the broad categories of monomorphizing vs dictionary passing, not something fundamentally different.
As usual, it lacks discussion about generics in:
Eiffel
Ada
Modula-3
D
MLton
.NET
Rust
Haskell
OCaml
Sather
BETA
CLU
Oh wait, that's me pointing out that none of these is even remotely discussed in the article, hinting that they did not, in fact, read that literature, and much prefer taking down strawmen they built from some sort of daguerreotypes of Java and C++.
> They just weren't ready to make any of the significant tradeoffs other languages have to do in order to support them.
Ah yes, the ever so convenient but never actually clarified "tradeoffs", which only ever happen for generics but oddly enough apparently don't happen for any other feature.
> there's definitely a lot of research in that area.
That there is; it would be nice if Go's designers ever bothered actually using it.
Nitpick, but the idea is even older, going back at least to Girard (1971) and Reynolds (1974). :)
I don't know exactly what the problem is for Go. There are tradeoffs, e.g., just with the type system: impredicative, stratified or predicative quantification, implicit subtyping or explicit type instantiation, value restrictions, variance handling for mutable inductive types, Hindley-Milner or bidirectional typechecking, etc. There are more tradeoffs with the implementation. Fortunately, these are all well understood by now.
However, it is also true that many mainstream languages famously got generics wrong. What's most infuriating about this situation is that a lot of research just gets ignored. If the question is really "should Go have generics whose design is based on C++ templates and/or Java generics", then the only sane answer is a resounding no.
I think even more damning than "these should be generic" is that there should be more of them. The real problem is that the lack of generics inhibits the introduction of more data structures, because they are hard to use.
It is true that arrays and maps/hashes take you a long way, but there's a lot of other useful data structures in the world. And despite the fact they may look easy at first, data structure code tends to be notoriously difficult to write both correctly and with high performance.
They aren't because the prevailing attitude is "what do you need generics for? Just do a cast!"
The issue is the Go developers are unwilling to consider generics until someone can come up with a problem that absolutely, positively cannot be solved without generics. And of course there isn't one.
Agreed: having map, filter, reduce with strong type guarantees would be a huge boon. Imagine being able to map over a collection and have the code hints understand the new output types in the chain. This is something you cannot have when your higher-order ops are all typed to interface{}.
Unfortunately, Go is a language whose original creator is of the opinion that map/filter/reduce are just ways to make your program "fit on one line"[1], and that you should just use a for loop[2].
The ## operator is an ANSI C feature, so not really "early". In pre-ANSI preprocessors you had to abuse the lexer to paste tokens, e.g. by relying on /**/ being deleted; nowadays comments are replaced by a space.
>This is a much more nuanced position than the Go team has expressed in the past, which amounted to "fuck generics," but it puts the onus on the community to come up with a set of scenarios where generics could solve significant issues.
It seems to me to be the same "Generics is some foreign concept we just heard of, and we're not sure how to implement them in Go yet, but we'll consider it if we chance upon some magical optimal way" that has been the official party line of Go since forever...
Even the "we are asking you for proposals" part has been there since a decade or so...
> This is a much more nuanced position than the Go team has expressed in the past, which amounted to "fuck generics," but it puts the onus on the community to come up with a set of scenarios where generics could solve significant issues.
Nah, this is pretty much the same answer they've been giving all along, lots of handwaving about how generics are so cutting edge that no one can figure out how to use them effectively yet ("we just can't wrap our heads around this crazy idea!")
It strikes me as a pretty odd statement, though. I'll admit that I'm not very familiar with Go, but surely you can look back at the benefits of generics in other languages? Go is hardly so specialised that you can't learn a single lesson from elsewhere.
I agree, I didn't find the statement refreshing at all; I found it insulting. You damn well know the use cases for generics. If you still don't like them, fine: say that explicitly. Don't pussyfoot around it and play the "we just don't know that much about it yet" lie.
Why does an ORM need generics? I ask because I've built something very like an ORM in Go and I didn't have any problem without generics. ADTs on the other hand...
This code (okay, I made the use line up because it's not on the website and I'm lazy; you do need one, though):
    use some::stuff::from::the::dsl;

    let versions = Version::belonging_to(krate)
        .select(id)
        .order(num.desc())
        .limit(5);
    let downloads = version_downloads
        .filter(date.gt(now - 90.days()))
        .filter(version_id.eq(any(versions)))
        .order(date)
        .load::<Download>(&conn)?;
is completely, statically typed, with zero runtime overhead. Generics + type inference makes sure that everything is valid, and if you screw it up, you get a compiler error (which honestly are not very nice at the moment).
Thanks to generics, all of this checking is computed at compile time. The resulting code is the equivalent of
    let results = handle.query("SELECT version_downloads.*
        FROM version_downloads
        WHERE date > (NOW() - '90 days')
          AND version_id = ANY(
              SELECT id FROM versions
              WHERE crate_id = 1
              ORDER BY num DESC
              LIMIT 5
          )
        ORDER BY date");
but you get a ton of checking at compile time. It can even, on the far end of things, connect to your database and ensure things like "the version_downloads table exists, the versions table exists, it has a crate_id column, it has a num column", etc.
You can absolutely create an ORM _without_ generics, but it cannot give you the same level of guarantees at compile time, with no overhead.
Ah, I think the bit where generics is useful is in making sure the value types are consistent (i.e., that you're not building a query that subtracts an int from a string or compares a date to a bool). This still doesn't guarantee that the types will match with the database columns, but it is a step up from the non-generic alternative.
How does the compiler know the types of the database columns?
That information has to come from somewhere. Also, type checking is negligible from a performance perspective, so I think the "zero overhead" is of minimal interest.
> How does the compiler know the types of the database columns?
There's two variants on a single way: basically, a "schema.rs" file contains your schema. You can write it out, and (with the help of migrations) it will update it when you create a new migration, or, you can have it literally connect to the DB at compile time and introspect the schema.
> Also, type checking is negligible from a performance perspective, so I think the "zero overhead" is of minimal interest.
Ah, but it's not "the type checking is fast", it's "Diesel can generate hyper-optimized output that doesn't do runtime checks". Like, as an interesting example, Diesel can (well, will be able to; this has been designed but not implemented yet) actually be _faster_ than a literal SQL statement. Why? The SQL literal is in a string. That means, at runtime, your DB library has to parse the string, check that it's correct, convert it to the database's wire format, and then ship it off. Thanks to strong generics, since the statements are checked at compile time, none of those steps need to be done at runtime; the compiler can pre-compile that DSL into Postgres' wire format directly.
The reason this hasn't actually been implemented yet is that it's admittedly a tiny part of the overall time, and the Diesel maintainers have more pressing work to do. I use it as an example because it's an easier one to understand than some of the other shenanigans that Diesel does internally. It couldn't remove this kind of run-time overhead without generics.
This is all very neat, but I think Go could do all of this too, except that "query compile time" would happen on application init or on the first query or some such. Still, very cool and good work to the diesel team!
The whole point of an ideal type system is that if code compiles, it is provably correct and won’t have bugs.
Few type systems come close to that (Coq is one of the few languages where the system might be advanced enough), but generally you want to completely remove entire classes of errors.
Yes, because Go couldn't do all that at compile time, which is the entire issue. It can't prove the type safety of your SQL query at compile time, or the type safety of your generics usage.
I think you meant to respond to Steve. I'm not a Rust guru (I keep trying it, but I don't have any applications for which it's reasonable to trade development time for extreme performance). I definitely don't know anything about diesel.
"diesel migraiton run" will run all pending migrations, there's revert/redo as well. "diesel migration generate" can generate these files and directories for you; it doesn't yet (that I know of) have helpers to write the SQL though (like Rails does for its migrations).
I guess I don't see how generics would help you reduce the number of insert/update functions. The basic problem of an ORM is to map struct fields to columns; I don't see how generics would help you here. Can you write the generic pseudocode you want to write?
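For illustration, one hypothetical shape such pseudocode might take, next to what Go forces today. Query, DB, Load and Insert are made-up names, and the bracketed type-parameter syntax is hypothetical here:

    package main

    // Query and DB are illustrative stand-ins for an ORM's types.
    type Query struct{ sql string }
    type DB struct{}

    // Hypothetical generic signatures (not valid Go, shown as comments):
    //
    //     func Load[T any](db *DB, q Query) ([]T, error) // typed results
    //     func Insert[T any](db *DB, record T) error     // one insert for all models
    //
    // What Go forces instead: the caller passes a destination pointer and
    // the library fills it via reflection, with no compile-time connection
    // between the query and the element type.
    func Load(db *DB, q Query, dest interface{}) error {
        // ... reflect over dest to find the slice element type,
        // map columns to struct fields, append rows ...
        return nil
    }

    func main() {}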
The idea of needing to come up with use cases for generics is baffling. The existence of generics in numerous other languages already support of plethora of use cases. I really don't get that statement at all.
Because generics have been around for decades, and anyone who can't be bothered to look into the use cases for a feature that spans numerous languages over that time period doesn't deserve the time it takes someone else to spoon feed this information to them. This goes for the author of the article as well.
I think this is a fair question. We have a lot of pseudo-intellectuals here who think 90% of their job isn't writing business functions. Not having generics in Go has not hindered me at all in performing the objectives of my business. Everyone wants generics; no one knows why. When I had generics in my previous two roles, where I used Java and C# respectively, I can count on two fingers the number of times I needed them, and once was because a library forced me to.
This is such a trollish statement, but I'll respond anyway, because maybe you actually are just that uninformed. Without generics, any number of libraries that utilize generic function blocks - Func [1] in C#, Callable [2] in Java, etc. - and do things like return the function's type parameter as a method result would not be possible. This is exceedingly common, at least in libraries. If you want to know how common, let me refer you to my friend http://google.com.
Just because something isn't valuable to you, personally, doesn't mean it's not valuable. As in all aspects of life, not everyone is you.
I am another developer who has never really suffered as a result of a lack of generics. Personally, I really like how dead easy golang code is to read and understand. God forbid golang become a language where untold amounts of logic can be packed into a single line of code, resulting in near mathematical levels of complexity which always require more time to understand. Like most complex concepts in life, I suspect the number of developers who can effectively use these tools is rather small compared to the number of people who think they "need" them. But hey, I'm a vim programmer who doesn't like to rely on a bloated IDE to deal with the code I'm interacting with, so I might be in the minority.
The question is _why_ do you need generics, for what use case(s), etc? Saying “I need generics otherwise I won’t use Go” is exactly the type of feedback they don’t want.
It seems like you’d have valuable feedback given that it’s a “showstopper” for you.
Not the GP, but you can think about generics as a better kind of interface:
- performance-wise: no dynamic dispatch means no virtual function calls, which means faster code.
- type safety: let's say I want a data structure that can store anything, but everything in it must be the same type. You just can't do that with Go's current type system without introspection (which is cumbersome and really slow).
Generics already exist in Go: maps, slices and channels are generic, and that was mandatory for performance reasons. The problem with not having generics is that the community can't build its own implementations of useful data structures: for instance, until the last release Go lacked a thread-safe map. That's fine, let's use a library for that … Nope, can't implement that, because "no generics".
Generics are rarely useful in your own code, but they allow people to create good abstractions in their libraries. Without generics, the ecosystem's growth is limited.
> you can think about generics as a better kind of interface
No, you can't, because generics don't provide the key feature of interfaces: run-time dynamism.
You could make a case that static polymorphism and dynamic polymorphism could be accomplished by a single mechanism, as traits do in Rust, but you can't say generics are "better" than interfaces, since they solve a different set of problems, with different implementation strategies, despite related theoretical foundations.
> The problem of not having generics is that the community can't build its own implementation of useful data structures
This is not true either, although it is true that it is harder to do this sort of thing in Go, the result may in fact not be as fast as you can achieve with monomorphisation, and you may have to write more code by hand.
The "trick" is the Go model is to provide an interface (which yes, potentially unnecessarily uses a dynamic mechanism) to implement indirect operations. For example the Swap function of https://golang.org/pkg/sort/#Interface enables https://golang.org/pkg/container/heap/#Interface - which is a reusable data structure. Compiler-level static-generics for slices and arrays, combined with indirecting through array indexes, enables you to avoid boxing when instantiating the heap interface. The data structure can be instantiated with an interface once, so there is no dynamic lookup for heap operations, but there is still a double indirect call.
Yes, this is less convenient than proper generics, but this doesn't mean you can't do these things.
Furthermore, I've never actually needed to do this outside of sorting (which, absolutely, is annoying) in large projects. Almost every time I need a specialized data structure, I need it for performance reasons, and so hand-specializing for my needs tends to be worthwhile anyway.
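To make that pattern concrete, here is roughly the standard library's own example of a heap built on those two interfaces; the interface{} in Push/Pop is where the dynamic indirection lives:

    package main

    import (
        "container/heap"
        "fmt"
    )

    // IntHeap implements heap.Interface (sort.Interface plus Push/Pop)
    // for a slice of ints.
    type IntHeap []int

    func (h IntHeap) Len() int           { return len(h) }
    func (h IntHeap) Less(i, j int) bool { return h[i] < h[j] }
    func (h IntHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }

    // Push and Pop use pointer receivers because they change the slice's length.
    func (h *IntHeap) Push(x interface{}) { *h = append(*h, x.(int)) }

    func (h *IntHeap) Pop() interface{} {
        old := *h
        n := len(old)
        x := old[n-1]
        *h = old[:n-1]
        return x
    }

    func main() {
        h := &IntHeap{5, 2, 8}
        heap.Init(h)
        heap.Push(h, 1)
        fmt.Println(heap.Pop(h)) // 1, the minimum
    }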
> Generics are rarely useful in your own code, but they allow people to create good abstractions in their libraries.
I'd like to see something like templates or generics for Go, but I definitely don't want Java-style "fake" generics without code specialization. Furthermore, I found that the vast majority of generics-heavy libraries back in my C# days were more trouble than they were worth, despite meaningful opportunities for better performance with value types. Copy/paste almost always resulted in a better outcome.
i almost feel this is like asking bjarne what use cases there are for 'classes' in c-with-classes. it's a structural language change that (depending on implementation details) can have sweeping ramifications on how one writes code.
type parameterization allows for the expression of algorithms, idioms and patterns wholly or partially separate from concrete type details. i'm sure others can better sell the topic though.
Do you not see how this is an insulting question? You know very well the use cases for generics. You do not need users to present new ones. You can literally Google "generics use cases" and get hundreds of thousands of results that directly answer that question.
I think the problem everyone has swallowing that ask is that generics are so widely taught, with so many use cases and so much literature (academic and otherwise), that the suggestion that they need use cases is laughable.
What use cases do they need other than the extremely large body of public knowledge on the matter, and why would one more example change anyone's mind?
To me this represents the epitome of the insular and outwardly hostile attitude that the Go team has toward good ideas elsewhere in the CS world. Would it really make a difference to the team if I or anyone else were to hunt down specific examples and present them with problems solved with generics and stronger type systems?
Asking for use cases when the use cases and benefits are already well-known seems pretty disingenuous to me. (Of both you and the blog post.)
... but, hey, just to indulge you, a few off the top of my head:
- Generic containers in libraries (aka not built-in)
- Parametricity to restrict user-supplied implementations of interfaces; not quite as valuable in a language with casts, etc., but still reasonably valuable.
- To give a sound account for the currently-magic built-in types.
That should be enough, frankly, but I'm sure I could come up with more if I were to spend 5 more minutes on it...
Can we stop pretending that they don't know of any compelling use cases now?
There are many different ways to provide generics (templates, typeclasses, etc.), each with their own pros and cons. It's not simply a matter of "add generics". And whatever solution they do come up with is going to bring its cons to those who would be better served by a different generics solution.
The Go team have been long criticized for choosing the option that fits Google, but not the rest of the world. This seems like their attempt to think about what others are doing with the language, beyond their insular experience, so they don't end up with something that fits Google perfectly but falls apart everywhere else.
If they don't take the time to learn how people intend to use generics in Go, the best solution for Google, and Google alone, is what we will get.
It's not unreasonable to ask "how do we implement this in the best way?", but I'll note a) that's not what was asked, and b) I have a hard time believing that code at Google is so incredibly "special" that they need a special kind of generics. Also, if Google is such a unique snowflake why not ask the Google people directly rather than the "Go community"?
They asked for use cases from a wide range of people to ensure they implement it in the best way. Subtly different, but essentially the same request.
> I have a hard time believing that code at Google is so incredibly "special" that they need a special kind of generics.
I didn't say they need special generics. I said the approach that works best at Google may not be the best approach for the population at large. If Google is best served by C++-style generics, while the community as a whole would be better served by Haskell-style generics, why just jump in and do it the C++ way before seeing how others might want to use it?
> Also, if Google is such a unique snowflake why not ask the Google people directly rather than the "Go community"?
Because they are trying to avoid the mistake of relying on one data point, as Go has struggled with in the past? They know what works at Google, but that doesn't necessarily work for everyone else. See: package dependencies, among many others.
> They asked for use cases from a wide range of people to ensure they implement it in the best way. Subtly different, but essentially the same request.
Phrasing is important, and obviously (from the reactions of me and others in the thread) the phrasing was way off and perceived as condescending and lazy.
> I didn't say they need special generics. I said the approach that works best at Google may not be the best approach for the population at large. If Google is best served by C++-style generics, while the community as a whole would be better served by Haskell-style generics, why just jump in and do it the C++ way before seeing how others might want to use it?
OK, so you said they don't need a special generics, but then say that they do. I must not be understanding what you're trying to say. (If this is about choosing trade-offs, then see the other poster who talked about trade-offs. Executive summary: Not doing generics is also a trade-off.)
Also: ASK GOOGLE.
> Because they are trying to avoid the mistake of relying on one data point, as Go has struggled with in the past? They know what works at Google, but that doesn't necessarily work for everyone else. See: package dependencies, among many others.
You can't have it both ways. Either Google is important enough to do it in a way that works for them, or the community is more important and they get to choose.
Anyway, I'm done with this conversation. I think we may be seeing this from viewpoints that are so different that it's pointless to continue.
I'm not sure how to argue constructively with someone who says "I'm not saying X, but..." and then immediately states a rephrasing of "X". I'm sure that's not what you think you are doing, but that's the way I'm seeing it, FWIW.
> Phrasing is important, and obviously (from the reactions of me and others in the thread) the phrasing was way off and perceived as condescending and lazy.
Maybe, but I don't know that we should be attacking someone's poor communication ability. I'm sure I've misunderstood at least one of your points too. Let's just focus on what was actually asked for: Use-case examples.
> OK, so you said they don't need a special generics, but then say that they do.
There isn't an all-encompassing 'generics'. Generics is a broad category of different ways to achieve reusable statements across varying types, in a type-safe manner. To try to draw an analogy, it is kind of like functional and imperative programming. Both achieve the function of providing a way to write programs, but the paradigms differ greatly, each with their own pros and cons.
If imperative programming is the best choice for Google, that doesn't mean the community wouldn't be better served by functional programming, so to speak. And when it comes to generics, there are quite a few different paradigms that can be used, and not easily mixed-and-matched. They are asking for use-cases to determine which generics paradigm fits not only the needs at Google, but the needs everywhere.
> Also: ASK GOOGLE.
The Go team is Google. They have asked Google. Now they are asking everyone else. I'm not sure how to make this any more clear.
> Either Google is important enough to do it in a way that works for them, or the community is more important and they get to choose.
In the past Google was seen as the most important, and they have been widely criticized for it. This is them moving in a direction that favours the community. And they are still being criticized for it... Funny how that works.
> There isn't an all-encompassing 'generics'. Generics is a broad category of different ways to achieve reusable statements across varying types, in a type-safe manner. To try to draw an analogy, it is kind of like functional and imperative programming. Both achieve the function of providing a way to write programs, but the paradigms differ greatly, each with their own pros and cons.
Yes, thank you. Everybody in this thread already knows that. PICK ONE.
(EDIT: I should also add: since Go is structurally typed and has mutable variables, that should be a good indication of what to do and what not to do. See e.g. Java arrays, variance and covariance.)
> If imperative programming is the best choice for Google, that doesn't mean the community wouldn't be better served by functional programming, so to speak. And when it comes to generics, there are quite a few different paradigms that can be used, and not easily mixed-and-matched. They are asking for use-cases to determine which generics paradigm fits not only the needs at Google, but the needs everywhere.
And now you're trying to bring imperative vs. functional into this?
I think IHBT... and this really is my last comment in this thread. Have a good $WHATEVER.
> Yes, thank you. Everybody in this thread already knows that. PICK ONE.
They are picking one, based on the use-cases that will be given to them in the near future. I don't understand what your point is here.
> Since Go is structurally typed and has mutable variables, that should be a good indication of what to do and what not to do.
That may be true, but the worst case scenario is that they learn nothing from the examples they get. You don't have to provide any if you feel it is a useless endeavour. If it gives them comfort, so be it.
> And now you're trying to bring imperative vs. functional into this?
It wasn't clear if you understood what I meant by there being different ways to provide generics. I struggled to write it as eloquently as I had hoped and the analogy was meant to bridge what gaps may have existed.
It's almost like communication can be difficult. Where have I seen that before? Hmm...
The problem is that the community did indulge the author and the other Go maintainers 5 years ago, and there was an apparent refusal to see the lack of generics as an issue, almost as though Turing-completeness was sufficient justification not to include them.
"Find us examples and show us so we can ponder this further" is incredibly condescending after we did that 5 years ago and they decided to stall (or to use the author's term, "wait") on the issue. Honestly I think it might be too late for generics to be added, because the large body of existing interfaces aren't generic and likely would have a very high transition cost.
(A bit late to your comment, but I found it valuable:)
It's actually a classic bullshitting tactic by people who have no actual argument. Effectively, it's "well, we'll defer a choice until we can PROVE that decision X is PERFECT in EVERY way."
That's not to say deferral is never justified: IF there's a real, quantifiable doubt, then by all means continue to discuss (though preferably with time limits). But to do that you MUST set out precisely what the problems are, what the potential solutions could (and couldn't) be, etc. Not just vague "this feels wrong" objections.
I remember when Java started to have generics in 1.5. It was very hard to go back to 1.4 after having used that. I feel that I just can't program without them any longer.
It's a small superset of Go that allows you to call map/filter/reduce methods on any slice. It's not true generics (you can't define your own generic functions or types) but it covers the most common use case for me: declarative data processing.
Conversely, I would stop using Go in my projects if it introduces generics.
If Go wants to be more sophisticated, there are other sophisticated languages I can use out there like Haskell, Rust, F#, Scala, etc. The problem I have learned in my career is "sophistication" usually gets in the way of getting things done because now you have all these clever developers full of hubris creating these novel implementations that nobody else in the world understands, and your project fails but hey -- at least you have a dope, terse, generic implementation -- totally rad.
So you'd rather take a half-dozen half-assed implementations of templates, macros, flaky cast-to-interface things, and a bunch of incompatible weirdness rather than a singular, reasonably effective pattern that's been proven successful over literally hundreds of programming languages?
That's never really true, because using a language means using other people's libraries, and using libraries means debugging and extending them, so you eventually end up needing to work with anything the language can serve up.
This is my beef with JavaScript these days... They introduced a slew of new, in my opinion insane, control structures, and while I can avoid them in my code, they are still slowly polluting my dependencies and adding obscure runtime requirements to my projects.
I think this is too little, too late. Anyone who saw an advantage to using generic programming is already using something else. They've already invested resources in something besides Go; why spend the effort to switch back for what will, at best, probably be very mediocre support for parametricity compared to whatever they're already using?
b) any of those people you mention aren't using go for some things and other things where this feature is desired
c) that the feature will 'at best' be at level X
d) that level X won't be enough to satisfy some use cases
and a whole host of other things, not least of which is the fact that Go itself was invented to suit some purpose after other languages existed, and has been successful for many users
The opposite side of that coin is that other programming languages do already exist.
Any new language is already playing catch-up to reach the state of the art. Go is 10 years old and in those 10 years has made astonishingly little progress at implementing basic features such as error handling, because the designers cannot escape their bubble.
The most sensible thing to do is to kill Go, not to improve it. The root problem is not the lack of this or that feature, but the bizarre beliefs of the people running the project.
> The most sensible thing to do is to kill Go, not to improve it.
Care to expand this? It's pretty clear that you don't find Go suitable for your own use cases, but I've found it to be an extremely productive language, despite the fact that it may have a couple of warts.
More importantly though, looking at C++ I think it may be hard to come up with a generics system that doesn't lend itself to abuse and ridiculous mega-constructs. I would love to see something that disallows craziness (of the Boost Spirit kind) but provides enough power to cover all the cases that suffer without generics.
The craziness is preferable to passing void* or Object or interface{} everywhere. At that point, you might as well use a language without strong typing, for all the safety you're going to get. You can end up with just as much craziness with void* and Object and interface{}.
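To make that concrete, here's a minimal sketch (names made up) of where the interface{} route gets you: the type error doesn't go away, it just moves from compile time to run time.

    package main

    import "fmt"

    // first returns the first element of an untyped slice; all type
    // information is erased at this boundary.
    func first(xs []interface{}) interface{} {
        return xs[0]
    }

    func main() {
        xs := []interface{}{"hello"}
        // Compiles without complaint, then panics at runtime:
        // "interface conversion: interface {} is string, not int".
        n := first(xs).(int)
        fmt.Println(n)
    }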
The Go team needs to stop trying to control what people do with their language. If people come up with crazy implementations, it's their own problem. It's up to the Go community to provide alternatives to prevent those crazy implementations from becoming standardized in Go programming culture. Refusing to implement generics because programmers might abuse them is paternalistic and counterproductive.
"If people come up with crazy implementations, it's their own problem."
Some people are turned away from C++ after seeing random bits of crazy language/template misuse, although when used sanely C++ could've served them nicely for their job.
Those signatures seem complicated to me because there's a pile of arguments and some of them are functions with extra annotations, not because there's generics involved. The equivalent signatures using interface{} (or maybe something more precise) would be likely just as noisy, if not more so.
Both C# and Java have taken different approaches to constrained generics that are still really useful for about...I dunno, the 90% case (C# has reified generics and so it's more like 95%, but type erasure also has its pluses when you're off-roading around in a framework somewhere). Scala goes a little further and covers, like, the 99% case, and plenty of inadvisable ones too.
C++ doesn't have generics, it has templates, which are effectively a syntax-constrained form of macros. That's why it lends itself to abuse (or creative use, depending on your take on macros).
So C++ generates separate code for each template type and C# and Java do a typecheck at compile time and reuse the same codepath for all types at runtime?
.NET does a mixture of the C++ and Java approaches: generic instantiations for value types get their own version of the code, while instantiations for reference types share a single version.
It's worth noting that in the case of C#, this is really an implementation detail. It could just as well share all instantiations by boxing and adding a bunch of runtime checks - the language semantics wouldn't change because of it.
In C++, on the other hand, separate compilation of the template for each instantiation is effectively part of the language semantics, because it has numerous observable effects.
My more cynical reaction is that someone who doesn't understand the benefit of generics or can't even come up with examples where generics are superior to their alternative should not be allowed anywhere near a language design discussion.
Java's generics have had issues due to use-site variance, plus the language isn't expressive enough, leading its users into a corner where they start wishing for reified generics (although arguably that's a case of missing the forest for the trees).
But even so, even with all the shortcomings, once Java 5 was released people migrated to usage of generics, even if generics in Java are totally optional by design.
My guess as to why that happens is that the extra type safety and expressivity are definitely worth it in a language, and without generics the type system ends up getting in your way. I personally can tolerate many things, but not a language without generics.
You might as well use a dynamic language. Not Python of course, but something like Erlang would definitely fit the bill for Google's notion of "systems programming".
The Go designers are right to not want to introduce generics though, because if you don't plan for generics from the get go, you inevitably end up with a broken implementation due to backwards compatibility concerns, just like Java before it.
But just like Java before it, Go will have half-assed generics. It's inevitable.
Personally I'm sad because Google had an opportunity to introduce a better language, given their marketing muscle. New mainstream languages are in fact a rare event. They had an opportunity here to really improve the status quo. And we got Go, yay!
After some initial enthusiasm due to its gentle learning curve (actually, the Golang learning curve is nearly non-existent for an experienced programmer), I got sick of Go programming pretty quickly. Writing anything non-trivial will have you fighting against the limitations of the language (e.g., lack of generics which is particularly aggravating when trying to write a library).
I've mostly moved on to Clojure for programming backend services. I still use Go for the occasional small utility program due to its speed and ease of deployment, though.
I've programmed scores of libraries in Go and found it pleasant, in fact it's more pleasant writing a library in Go than in any other language I've used.
I've never once even considered the fact that I might need generics because I've never run into an unsolvable problem or an extremely inelegant engineering solution. I see a lot of people claiming that the lack of generics is a big problem but no one is actually showing a concrete case where generics are necessary.
C doesn't have generics but we never have this discussion when we talk about C.
If you have a []T and you want to apply a function to make it into []K, you need to explicitly write a loop in every case. You might not think that's so bad, but note that T and K could also be interfaces such that T is a superset of K. In that case, in Go, a T can be used as a K, but a []T cannot be used as a []K (meaning you have to write an explicit loop for something that should be an implicit type conversion).
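A minimal sketch of that, using io-style interfaces as stand-ins for T and K (the helper name is made up):

    package main

    type Reader interface{ Read(p []byte) (int, error) }

    type ReadCloser interface {
        Reader
        Close() error
    }

    // Every ReadCloser is a Reader, but a []ReadCloser is not a []Reader:
    // the conversion must be written out, one loop per pair of types.
    func toReaders(rcs []ReadCloser) []Reader {
        rs := make([]Reader, len(rcs))
        for i, rc := range rcs {
            rs[i] = rc // element-wise it's implicit; slice-wise it's not
        }
        return rs
    }

    func main() {}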
That is a trivial example where generics help. And yes, the endless boxing and runtime type assertion does make your code slower eventually (see the comment from the libcuckoo authors).
This is one of the reasons why the container and sort standard-library packages are such a joke. They're not generic, so they have to special-case every different type. With generics, the entire sort package would just be a function that takes an []Ordinal and sorts it. Container would similarly only require one implementation of each data structure (because they would be generic).
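For reference, this is the per-type ceremony sort.Interface demands today; every new ordering on every new element type needs its own three-method type like this:

    package main

    import (
        "fmt"
        "sort"
    )

    // byLen exists only to sort []string by length.
    type byLen []string

    func (s byLen) Len() int           { return len(s) }
    func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }
    func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

    func main() {
        words := []string{"banana", "fig", "apple"}
        sort.Sort(byLen(words))
        fmt.Println(words) // [fig apple banana]
    }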
I find it hard to believe that you've programmed "scores of libraries" in Go and never hit any of these issues. I also have programmed plenty of Go, and these problems are just frustrating.
If that's true then you might not benefit from generics directly. But you will benefit from having an ecosystem of libraries that take advantage of generics. And even if you don't use other people's libraries, not everyone in the world happens to be as lucky as you and they do need generics to make their lives easier.
But again, I still severely doubt you've never run into a problem with Go's lack of generics. I ran into it after about 3 months of working with Go. Maybe you didn't find the work-arounds problematic, but that doesn't mean that the original limitation (no generics) doesn't cause problems. You just choose to accept the work-arounds as the "right way" of doing things.
If you haven't already, I would recommend looking at Rust or another modern language that has generics to see what sort of benefits you can get that you can't directly get with Go. The fact that Go won't let you use []T as []I when T implements the interface I is pretty silly IMO. Rust lets you do that with the From trait, and you can even do it more generically using a generic T.
Besides, C gets an excuse because the language was designed before generics were even invented, and with pre-processor macros it is possible to implement a poor man's generics system.
Similar background here, but I found Go refreshingly liberating because everything in the language makes sense and there isn't a lot of magic going on. I might write a bit more code, but it's crystal clear how it works and, more importantly, I can come back in 6 months and grok the code. In fact, this is the first language I've used in 25 years of coding where I can read anyone's code and grok it. go fmt might be the best part of Go.
The point of Go 2.0, IIUC, is that it throws away backwards-compatibility guarantees, so if they want to do something in Go 2.0 that breaks correct go1 code to support generics, that's on the table. So backwards-compatibility nastiness is not guaranteed.
>The article mentions that Go 1 code needs to be able to coexist with Go 2 in the same codebase.
Interesting. Are there any other languages where that is allowed? I'm assuming that by "codebase" you mean all the code that gets compiled into one program or library (for compiled languages), or is part of one program or library (for interpreted languages). And I don't mean the case where only a common subset of Lang X v1 features and Lang X v2 features are used in the same codebase (because in that case there is no issue). That latter case is possible, for example, in Python, and probably in many other languages too. In fact, by the definition of the case, it should be possible in all languages.
I would just read the article. Basically, the restriction they want to impose is to make sure that you can import v2 code into a v1 codebase and vice-versa with a v2 compiler. So that means any syntax breakage would need to be versioned or otherwise inferred, I guess.
My guess as to why people in Java use generics so much and switched to them so quickly is the squiggly yellow lines in their IDEs. A lot of Java code is easier to read/write with explicit and implicit casting and no use of generics. And the type erasure makes for sometimes unintuitive behavior at runtime. Being familiar with Java, I appreciate Go's skepticism toward generics because maybe just maybe they'll figure out a way to add them after the fact in an elegant way.
> Java`s generics have had issues due to use site variance
Use site variance can be really powerful when you're dealing with invariant data structures like Lists. You can then treat them like covariant or contravariant depending on a particular method call at hand.
I get that everyone would love to have a functional language that's eager by default with optional lazy constructs, great polymorphism, statically typed with inference, generics, a great concurrency story, an efficient GC, that compiles quickly to self-contained binaries with simple and effective tooling which takes only seconds to set up, while giving you performance that equals Java and can rival C, with a low memory footprint.
But, I don't know of one, and maybe that's because the Go team is right, some tradeoffs need to be made, and they did, and so Go is what it is. You can't add all the other great features you want and eat the Go cake too.
Disclaimer: I'm no language design expert. Just thinking this from the fact that I've yet to hear of such a language.
Honestly, no. For starters, I don't want a functional language (at least not a purely functional one). Purely functional programming has some distinct downsides. My preferences are firmly on the multiparadigm side, and I'm perfectly happy with a reasonable imperative language (that still supports higher-order functions and closures).
What I'm not happy with is language designers seemingly ignoring the state of the art even when it comes to picking low-hanging fruit.
First, I'm not looking for a language. Programming languages are part of my field. I was making an observation rather than expressing a need. If there's a language that's not totally obscure or esoteric, I've probably written some serious code in it at some time.
Second, I know and use Scala (among other languages) and have done so since the 1.x versions, but the JVM dependency is too often a problem (and Scala Native is not yet mature).
Multicore OCaml is in the works and is looking amazing, but is indeed a ways away. They're taking their time and doing it properly so things are fast and safe.
The thing is, until it's shipped, it's still vaporware for most people, especially if they want to use it in a commercial setting.
KDE 4 was released in 2008 and it was supposed to be ported to Windows. We're in 2017 and KDE 5 is still far from being considered stable on Windows. And I'm sure you can also think of many examples of programs promised and never delivered or underdelivered.
It was ported to Windows. I cannot check the remarks about stability because I use Linux, but it sounds plausible. There's only a handful (maybe 2 or 3) developers working on Windows support in their spare-time AFAIK. I would guess that KDE had anticipated more Windows developers joining the project as it progressed towards maturity.
That's always the problem with open-source projects: It's very hard to do planning and forecasting with a bunch of volunteers. Even if they commit to a roadmap, there will always be someone who has to step down because of private issues (new job, new child, etc.). Go is in a much better position since Google has headcount assigned to it (again, AFAIK).
It isn't a great concurrency story, but the shortcomings are also overstated. OCaml does message-passing concurrency just fine and allows for shared memory for arrays and matrices of scalars, which is good enough for most scenarios.
(Technically, there's also Ocamlnet's multicore library, but that's too low-level for most people.)
F# is pretty good there, especially if one uses a library like Hopac. However I think it's still not on the level of Go, since concurrent operations are represented through monadic types (like Job<T>), and so everything has one level of indirection. F# workflow builders ("do notation") makes that somewhat more bearable, but I still think it's harder to work with than just using normal function calls on normal lightweight threads, like available in Go.
it's a bummer that the windows story for ocaml is so sketchy. i'd like to use it for work (various utilities and whatnot) but the ecosystem seems to make a lot of assumptions about the system it runs on. (for me there's still haskell and rust at least.)
Has algebraic data types (however variants can't have generic parameters not present on the data type itself, but you can use trait objects to do that).
Functions that don't take &mut parameters, don't take & parameters of types with interior mutability, and don't call functions that do I/O are pure (although they may not terminate and may abort).
> optional lazy constructs
"Automatic" lazy costructs are usually unnecessary and thus almost never used, but can be implemented as a library using macros.
Can also just use closures and call them explicitly.
> great polymorphism
Has trait/typeclass polymorphism with either monomorphization or dynamic dispatch.
Structures can't be inherited, but composition gives equivalent results and is the proper way to do it.
There have been several proposals for inheriting structures though, so maybe it will be available at some point, despite not being advisable.
> statically typed with inference
Yep.
> generics
Yup.
Not higher-kinded, no constant integer parameters, and not variadic yet, but those features should be available in the future.
> great concurrency story
Rust is the only popular language supporting safe concurrency without giving up the ability to share memory between threads.
> an efficient GC
It turns out that a GC is unnecessary, and that reference counting is enough, which is what Rust provides (although it's still relatively rarely used).
If you insist on a tracing GC, there are/have been several attempts to add it, and maybe there will be a mature one in the future.
In particular, there is a "rust-gc" crate, and Servo has a working integration with the SpiderMonkey GC, which I suppose should be possible for others to use with some work.
> compiles quickly
Work in progress (incremental compilation, more parallelization).
> self contained binaries
Yes (or optionally using shared libraries).
> simple and effective tooling which takes only seconds to setup
rustup can install the compiler and package manager in one line.
If you want an IDE, you'll have to install that too separately. Work in progress on improving IDE functionality and ease of use.
> giving you perfomance that equals java and can rival C
Performance is at least theoretically the same as C since the generated assembly code is conceptually the same (other than array bounds checking, which can be opted out from with unsafe code).
As my sibling commenter says, they're the default. There are some details though:
Rust uses glibc by default, which must be dynamically linked. It's usually the only thing that's dynamic about Rust binaries, but it does mean that compiling on an old CentOS box is a decent idea if you want a wide range of compatibility.
Alternatively, you can use MUSL, which gives you a fully static binary.
The default is that all Rust code is statically linked, but since you may link C code as well, that may or may not be statically linked. It's done by packages, so there's no real default. Many of them default to static and provide the option of dynamic.
That's why there's not really a guide, that's really all there is to it.
They're the default, even when you include 100+ packages via Cargo. The only issue I've faced is standard Linux cross-distro issues due to libc versions. Even most of the crates that handle bindings to C libraries link them statically by default.
> To minimize disruption, each change will require
> careful thought, planning, and tooling, which in
> turn limits the number of changes we can make.
> Maybe we can do two or three, certainly not more than five.
> ... I'm focusing today on possible major changes,
> such as additional support for error handling, or
> introducing immutable or read-only values, or adding
> some form of generics, or other important topics
> not yet suggested. We can do only a few of those
> major changes. We will have to choose carefully.
This makes very little sense to me. If you _finally_ have the opportunity to break backwards-compatibility, just do it. Especially if, as he mentions earlier, they want to build tools to ease the transition from 1 to 2.
> Once all the backwards-compatible work is done,
> say in Go 1.20, then we can make the backwards-
> incompatible changes in Go 2.0. If there turn out
> to be no backwards-incompatible changes, maybe we
> just declare that Go 1.20 is Go 2.0. Either way,
> at that point we will transition from working on
> the Go 1.X release sequence to working on the
> Go 2.X sequence, perhaps with an extended support
> window for the final Go 1.X release.
If there aren't any backwards-incompatible changes, why call it Go 2? Why confuse anyone?
---
Additionally, I'm of the opinion that more projects should adopt faster release cycles. The Linux kernel has a new release roughly every ~7-8 weeks. GitLab releases monthly. This allows a tight, quick iterate-and-feedback loop.
Set a timetable, and cut a release with whatever is ready at the time. If there are concerns about stability, you could do separate LTS releases. Two releases per year is far too slow, I feel. Besides, isn't the whole idea of Go to go fast?
>If you finally have the opportunity to break backwards-compatibility, just do it.
I think Russ explained pretty clearly why this is a bad idea. Remember Python 3? Angular 2? We don't want that to happen with Go 2.0.
>Additionally, I'm of the opinion that more projects should adopt faster release cycles.
I am of the opposite opinion. In fact, I consider quick releases to be harmful. Releases should be planned and executed very carefully. There are production codebases with millions of lines of Go code. Updating them every month means that no progress will be made at all. The current pace is very pleasant, as most of the codebases I work with can benefit from a leap to a newer version every six months.
Breaking things isn't what makes people want to move; it's a cost, not a benefit. You need to offer benefits to get people to pay the cost. The Python 2/3 problem arose because, for many users, there was too much breakage for the perceived benefit (especially early on), not because there was too little breakage.
Fwiw, I think breaking implies improving. Arguing that breaking doesn't inherently mean improving is... well, duh. So in the case of many people's comments here, "not breaking enough" means not improving enough. I know this is an obvious statement, but I feel like you're arguing a moot point, i.e., being a bit pedantic.
As an aside, since you're making the distinction, can you have meaningful benefit without breakage? Eg, you're specifically separating the two - so can you have significant improvements without breakage?
It would seem that pretty much any language change, from keyword changes to massive new features, breaks compatibility.
> As an aside, since you're making the distinction, can you have meaningful benefit without breakage?
Sure, in two ways:
(1) Performance improvements with no semantic changes.
(2) New opt-in features that don't break existing code (such as where code using the new feature would have just been syntactically invalid in the old version, so the new feature won't conflict with any existing code.)
There would be no reason for SemVer minor versions if you couldn't make meaningful improvements while maintaining full backward-compatibility.
I think I'm simply wrong here. I was envisioning "breaking" as being incompatible with Go1. If Go2 were a superset of Go1, it would allow Go1 code to run flawlessly in Go2 and still allow any new keywords/features.
My assumptions were incorrect, and writing my reply to you sussed it in my head. Thank you for your reply, sorry for wasting your time :)
>As an aside, since you're making the distinction, can you have meaningful benefit without breakage? Eg, you're specifically separating the two - so can you have significant improvements without breakage?
One way is by having a new language feature which does not interact with anything else in the old version of the language, i.e. is orthogonal.
It's barely related to the topic, but another team in my company is thinking about rewriting a pretty large codebase in ObjC into something more sustainable. They briefly discussed Swift, but decided against it exactly due to its relatively frequent breaking changes.
Which emphasises even more the fact that large private codebases really dislike breaking changes.
Adding generics isn't even backwards-incompatible. AFAIK Java 1.5 was just fine with Java 1.4 code, and so was C# 2.0. The latter, using reified generics (and not cheating with builtins), led to the duplication of System.Collections; but builtins aside, as of Go 1.9 Go has under half a dozen non-generic collections (versus 25 types in System.Collections, though some of them didn't even make sense as generics and don't have a System.Collections.Generic version), so that's unlikely to be an issue.
True, but C# also has nominal types, overloading, and explicit interface implementation. Adding generics without breaking existing code without those features looks very difficult to me.
Too many people are wanting "magic" in their software. All some people want is to write the "Happy Path" through their code to get some Glory.
If it's your pet project to control your toilet with tweets then that's fine. But if it's for a program that will run 24/7 without human intervention then the code had better be plain, filled with the Unhappy Paths and boring.
Better one hour writing "if err" than two hours looking at logs at ohshit.30am.
there's nothing magic about generics. every time you make a channel or a map in go, you're using a generic function even if go people don't want you to call it that.
there's nothing magic about exceptions, too, it's that it's harder than necessary to use them correctly and that's why it's not as big of a deal to not have them in go - as evidenced by this thread.
No, it was adopted quickly because it's a simple, script-like language producing static binaries that didn't require any runtime, highly portable with built-in cross-compilation, requires very little resources, is able to perform pretty damn good and has an extensive standard library (although not perfect) supporting many modern technologies.
That's a lot of boxes ticked off. I remember wanting to start hobby projects with friends where the end result would preferably be a native binary, and the limitation was that I was the only one with a C/C++ background. Now anyone with some scripting background can pick up Go in a few days, without having to worry too much about a build system, portability, CPU architectures, performance, memory leaks, ...
Sure, Google was the initial push, but it only really took off once projects like Docker showed its potential.
I think that's the point really. I think the average programmer likes using generics, but rarely builds their own code with them. I've used them from time to time to great effect, but the use cases were pretty isolated. Maybe that makes me average? /shrug
If you declare a map[string]int, Go will guarantee that keys are always strings and values are always ints, and it's a compile-time error to pass it to someone expecting a map[string]float64. Without generics, the only way to do that is to generate and compile a MapStringInt struct and a MapStringFloat64 struct and a...
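A tiny sketch of that guarantee in action (hypothetical names):

    package main

    import "fmt"

    // average only accepts float64 values; map[string]int is simply a
    // different type.
    func average(scores map[string]float64) float64 {
        var sum float64
        for _, v := range scores {
            sum += v
        }
        return sum / float64(len(scores))
    }

    func main() {
        ages := map[string]int{"ann": 31}
        // average(ages) // does not compile:
        // cannot use ages (type map[string]int) as type map[string]float64
        _ = ages
        fmt.Println(average(map[string]float64{"ann": 31, "bob": 25}))
    }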
In Go, the magic right now is actually in special-cased types like map. Generics actually have the potential to reduce the amount of magic as it exists currently.
The design can go badly wrong, of course, but it can also go wonderfully right. Generics are a very important feature to consider for the language.
I have no opinion on exceptions with regard to Go specifically. I think they serve a good purpose in other languages but are often misused.
Yes, I had read in an article about (or by) Walter Bright on D that this was the insight he had when working on the design of generics/templates for D:
To paramet(e)rize the types (of a class or function or method), just like you parametrize the arguments of a function or method.
Update: searched for and found the article that mentions that:
[ Walter: We nailed it with arrays (Jan Knepper’s idea), the basic template design, compile-time function execution (CTFE), and static if. I have no idea what the thought process is in any repeatable manner. If anything, it’s simply a dogged sense that there’s got to be a better way. It took me years to suddenly realize that a template function is nothing more than a function with two sets of parameters –compile time and run time–and then everything just fell into place.
I was more or less blinded by the complexity of templates such that I had simply missed what they fundamentally were. There was also a bit of the “gee, templates are hard” that predisposes one to believe they actually are hard, and then confirmation bias sets in.
I once attended a Scott Meyers presentation on type lists. He took an hour to explain it, with slide after slide of complexity. I had the thought that if it was an array of ints, his presentation would be 2 minutes long. I realized that an array of types should be equally straightforward. ]
So much this. Been doing this for 25 years and I loathe magic like exceptions. I consider Go's explicit error handling a plus. Any editor would let you expand an "ife" snippet to generate the error scaffold.
I don't hate generics, but I don't require them either. If the Go guys can make them work well and fit into the language i'd be fine with it, but I don't want them to feel pressured into throwing something out there for the language purists. I doubt they feel pressured. They've been around the block and have as good a sense as any group i've ever seen for what to include in the language.
Go is not the language to use if you want to write clever code, but it's great when you write code for businesses that must run 24x7.
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve.
This is sampling bias at work. The people who need generics have long since given up on Go and no longer even bother participating in Go-related discussions, because they believe it will never happen. Meanwhile, if you're still using Go, you must have use cases where the lack of generics is not a problem and the existing language features are good enough. Sampling Go users to try and find compelling use cases for adding generics is not going to yield any useful data, almost by definition.
> no longer even bother participating in Go-related discussions, because they believe it will never happen
/raises hand
I like when tools are good, but I've basically written off Go as a tool for generating unsustainable code right now (and a big part of it is the odious options, either type-unsafety or code generation, for things that are trivially handled by parametricity). If things change, I'll be happy to revisit it, but the described thought process just ignores my existence. And that's fine from my perspective, because I don't need Go, but I'd still like it to be better in case I do need to use it.
I think the Go team would still like to understand your production uses that caused you to write off Go. What is your problem domain? How would you accomplish that in Go? How did you actually solve your problem with a different language?
For me, user provided data structures are the only thing that comes to mind (sync.Map for example) in my production use of Go. But even then, my pain (time spent, errors made) due to a lack of generics is low.
You may have written off Go. But if you're willing, I bet the Go team would still like to read your production code use-cases that caused you to do so.
My "problem domain" is "good, correct code". I write code in many different spaces, from web software to mobile apps to (a lot of) devops tools to (less these days) games. My criticism of Go isn't "it's not good at code for my problem domain," it's "it's not good at code for your problem domain, either, and your workarounds shouldn't need to exist."
User-provided data structures are a big one. (A language where I have to copy-paste to have typesafe trees is a bad language.) But, beyond that, I build stuff to be compiler-verified. Here's a trivial example that I did yesterday that I straight-up can't do in Go, but did in C# as part of an object-graph rehydration routine (where object A required pointers to object B from a different data set, and since it's 2017 we aren't using singletons, so the JSON deserializer takes in converters that look up objects based on keys in object A's JSON representation):
public interface IKeyed<T> { T Key { get; } }
Just being able to do that on an object is powerful. C# has reified types; I know what T is at runtime. (I can't specialize on the type in C#, but I can fake it.) But I also know what it is at compile-time, and so I can't accidentally pass `IKeyed<String>` and `IDictionary<Int32, DependentObject>` to the same function because type constraints, yo.
I don't really care about fast code, because computers are all future-computers-from-beyond-the-moon. If I'm using a statically-typed language, I care about correct. Duplicated code is code that will--not may--lead to bugs. State I have to maintain in my head (like "this is an int map, talking to an int-keyed object set") is state that will--not may--lead to bugs.
If you're a statically-typed language that isn't helping me avoid bugs, you might as well not exist. You don't have to be whomping around stuff like Scala's Shapeless--I would argue that you shouldn't--but you have to provide at least a basic level of sanity and the difficulty of expressing stuff outside of parameterized types (even when there are workarounds) makes it not worth the hassle. I'll cape up for damned near everything in at least some context, from dynamic languages like Ruby and ES6 to Java (well, Kotlin) or C# to Modern C++ to Scheme or a Lisp. My no-buenos on programming languages are limited pretty exclusively to C and Go because both languages encourage me, encourage all of us who use them, to write bad code.
I'm new to Go and actually don't do professional IT work, but I immediately felt the need for a type that allowed me to store any type.
My use case was/(still is) writing a spreadsheet where a user can enter different stuff in a cell and I want to store the value in an underlying data type. I now ended up storing everything as string because I couldn't figure out an efficient way to implement it.
My goal would have been to have one generic type that can store the cell types text and number which I can store in an array. This generic type could then implement an interface method like format() which calls a different format method depending on the type the cell has.
I played around with interface and reflect.TypeOf but lost too much performance compared to just storing everything in a string. Strings on the other hand now fill up my memory with stuff I don't need - at least that is my impression.
I don't have much programming experience, so maybe I misunderstand the discussion or just overlooked a way to solve my issue efficiently. So sorry if what I mentioned is something easily doable in Go.
> My goal would have been to have one generic type that can store the cell types text and number which I can store in an array.
FWIW generics wouldn't help you with that, "sum types" would. A sum type is a souped-up enum which can store associated data alongside the "enum tag", so you can have e.g.
    enum Foo {
        Bar(String),
        Baz(u8, u32),
        Qux,
    }
and at runtime you ask which "value" is in your enum:
    match foo {
        Foo::Bar(s) => // got a Bar with a string inside
        Foo::Baz(n, _) => // got a Baz, took the first number
        Foo::Qux => // got a Qux, it stores no data
    }
(and in most languages the compiler will yell at you if you forget one of the variants).
So for your use case you'd have e.g.
    enum Value {
        Boolean(bool),
        Integer(u32),
        String(String),
        // etc…
    }
> I played around with interface and reflect.TypeOf but lost too much performance compared to just storing everything in a string.
Maybe try a struct with an explicit tag rather than reflection? e.g.
    type ValueType int

    const (
        TYPE1 ValueType = iota
        TYPE2
        TYPE3
        // … one for each concrete type you want to wrap
    )

    type Value struct {
        kind  ValueType // explicit tag ("type" itself is a reserved word in Go)
        value interface{}
    }

and then you can

    switch value.kind {
    case TYPE1:
        t1 := value.value.(Type1) // Type1 is whatever concrete type you wrapped
        // use t1 …
    case TYPE2:
        t2 := value.value.(Type2)
        // use t2 …
        // etc...
    }
I'd expect reflect.TypeOf to be very expensive; this is just a check + a cast.
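Applied to your spreadsheet case, a rough sketch (all names made up) could even avoid interface{} entirely by storing both representations in the struct, which also sidesteps boxing for numbers:

    package main

    import (
        "fmt"
        "strconv"
    )

    type CellKind int

    const (
        TextCell CellKind = iota
        NumberCell
    )

    // Cell stores both representations; numbers stay unboxed.
    type Cell struct {
        kind CellKind
        text string
        num  float64
    }

    // Format dispatches on the tag -- a plain switch, no reflection.
    func (c Cell) Format() string {
        switch c.kind {
        case NumberCell:
            return strconv.FormatFloat(c.num, 'f', -1, 64)
        default:
            return c.text
        }
    }

    func main() {
        cells := []Cell{
            {kind: TextCell, text: "total"},
            {kind: NumberCell, num: 42},
        }
        for _, c := range cells {
            fmt.Println(c.Format())
        }
    }

You pay one unused field per cell, but Format() stays a cheap tag switch.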
Thanks for that information. I had thought enums could only be used as named types. Good to know they can hold data as well. I had thought of that struct as well (ok, not in that professional way with types as constants and iota :-)) but found it a bit annoying that I have to store a type myself although the type itself should already have the type information somewhere; consequently that information is stored redundantly. But I'll definitely try that. Thanks.
> I had thought enums can only be used as named types. Good to know they can hold data as well.
That depends on the language, and enums being sum types also depends on the language.
* Rust and Swift enums are sum types (they can hold data and every variant can hold different stuff), there is also a ton of (mostly functional) languages with sum types not called enum: Haskell, F#, OCaml, … there are also languages which replicate them via other structures (sealed classes in Kotlin, case classes in Scala).
* Java enums (a bare "enum" is the C one) can hold data, but every variant must hold data of the same type, so you'd need up/down casts
Maybe. My intuition is that k8s is going to lose its luster once people actually have to do a little math related to its costs; while I think there are real reasons for something like it in on-prem environments, I think the cloud-in-your-cloud-so-you-can-cloud-while-you-cloud approach currently being rolled out is profoundly unwise. (Which is to say: k8s is functionally something to weld together to make an OpenStack alternative--such as it is--rather than a layer to plop on top of one.)
Docker...yeah. We're stuck with it there. And their historical security posture doesn't make me super excited, but...yeah.
We have a similar viewpoint w.r.t Higher Kinded Types in F#. Much of the motivation for HKTs comes from people wishing to use them how they've used them in other languages, but we've yet to see something like this in our language suggestions or design repositories:
"Here is a concrete example of a task I must accomplish with <design involving HKTs>, and the lack of HKTs prohibits me from doing this in such a way that I must completely re-think my approach.
(Concrete example explained)
I am of the opinion that lacking HKTs prevents me from having any kind of elegant and extensible design which will hold strong for years as the code around it changes."
Not that HKTs and TypeClasses aren't interesting, but without that kind of motivation, it's quite difficult to justify the incredibly large cost of implementing them well. And that cost isn't just in the code and testing of the compiler. This cost also surfaces in tooling, documentation, and mindshare. It could also have an unknown effect on the ethos of F#, which for better or for worse, does bring with it a bit of a somewhat intangible quality that many people like.
Personally, I think Golang should implement generics. But I'm sympathetic with the views expressed by Russ Cox here. And I don't think it's just sampling bias.
This is a fair point. OTOH, just punting entirely seems like the wrong reaction.
Rust, for example, has been thinking about HKTs for a while, and they might not fit well in the language. Rust wants to solve the problems that HKTs solve, and it looks like the solution is converging to ATCs (associated type constructors).
It's taking a while, but Rust is newish, and it isn't taking anywhere near as long as Go is taking to get generics.
It's the same thing in principle, but it's one of those cases where details matter a lot. Generics have been mainstream - as in, implemented in mainstream programming languages used to write millions of lines of production code - for over a decade now. At this point, someone claiming that they need specific user scenarios to convince them that generics are worthwhile, or that they aren't aware of a "good enough" way to implement them, beggars belief.
I have used generics for quite a long time. When I used them I loved them. Then I moved to a different language for a few years, and lost generics. Now I have written large programs in GoLang too - and frankly I do not miss generics at all. I also find that reading code with generics isn't straightforward - it requires a slightly complicated mental model.
So what's happening here? The parent made a reasonable comment, without flaming anybody. All his children give evidence for him (except one but they weren't able to downvote) and yet the parent is downvoted. Votes don't matter but I'm curious about drive-by-downvoting. What does that signify? Fanboyism?
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. […] If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.
Not implementing generics, then suggesting that it would be nice to have examples of generics being used in the wild… You had it coming, obviously.
Now what's the next step, refusing to implement generics because nobody uses it?
> Every major potential change to Go should be motivated by one or more experience reports documenting how people use Go today and why that's not working well enough.
My goodness, it looks like that is the next step. Go users have put up with the absence of generics, so they're not likely to complain too loudly at this point (besides, I hear the empty interface escape hatch, while not very safe, does work). More exacting developers have probably dismissed Go from the outset, so they won't be able to provide those experience reports.
> Not implementing generics, then suggesting that it would be nice to have examples of generics being used in the wild… You had it coming, obviously.
I think you misunderstood. Clearly he meant to ask for examples from the real world that lack generics, but shows how adding them would improve the system.
> Clearly he meant to ask for examples from the real world that lack generics, but shows how adding them would improve the system.
I don't think many such examples will emerge.
First, you have the empty interface escape hatch. It's cumbersome, but it works. The real problem with the repeated use of this escape hatch (instead of proper generics) is the lack of compile-time type safety, which is replaced by dynamic checks. This introduces elements of dynamic typing that generics could have avoided. That has a cost, which unfortunately is hard to assess.
Second, Go users tolerate the absence of generics for some reason. Maybe they don't really need them, but then they don't have a compelling use case. Maybe they do need them but didn't realise the limitations of the language would hurt them down the line; in that case, are they competent enough to articulate why generics would be better? They made quite the blunder when they chose Go, after all.
That said, he also wrote this:
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve.
Of course he doesn't: Go doesn't have generics. Go users found other ways, thus proving they didn't really need generics. And generics users don't use Go…
Ok, thanks for the clarification. I wonder if there are situations in big teams in which one team that uses generics can take a look at a large Go codebase within the same company and see what they think? They must have received such feedback at Google, surely?
There's at least one high profile example: the standard library itself. It must have been obvious by the time they designed the standard library, before the language was out. They had to have feedback then, just look at the trail of rage against the absence of generics.
They ignored it then —I have no idea why. I'm not sure they'll listen now.
This is a remarkable oversight which makes it impossible to write purely-functional code in Go. We also see this same problem in most other imperative languages, with organizations going to great lengths to emulate const data.
Const-ness in the spirit of languages like Clojure would seem to be a relatively straightforward feature to add, so I don't really understand the philosophy of leaving it out. Hopefully someone here knows and can enlighten us!
There doesn’t need to be a philosophy behind leaving it out. Go was started by selecting only features which were deemed necessary. I think it’s fair to assume the creators of Go didn’t design it for writing purely-functional code, so that’s why it’s not in (yet?)
I believe part of the reason was also some experience with C++, in which you sometimes have to "unconst" some fields of your const classes (the mutable keyword). This is a really ugly and nonintuitive design, so I assume they'd rather take extra care to make sure they don't have to repeat it. Even if it means no const at all.
I don't think this kind of thing is all that non-intuitive if you reframe const as shared vs. unique references. Rust is a good example of this, although with Go you would want to sidestep all the Cell stuff since it's unnecessary.
There are many comments griping about generics. There are many comments griping about the Go team daring to even ask what problems the lack of generics cause.
But take a look at this article about the design goals of Go: https://talks.golang.org/2012/splash.article Look especially at section 4, "Pain Points". That is what Go is trying to solve. So what the Go team is asking for, I suspect, is concrete ways that the lack of generics hinders Go from solving those problems.
You say those aren't your problems? That's fine. You're free to use Go for your problems, but you aren't their target audience. Feel free to use another language that is more to your liking.
Note well: I'm not on the Go team, and I don't speak for them. This is my impression of what's going on - that there's a disconnect in what they're asking for and what the comments here are supplying.
(And by the way, for those here who say - or imply - that the Go team is ignorant of other languages and techniques, note in section 7 the casual way they say "oh, yeah, this technique has been used since the 1970s, Modula 2 and Ada used it, so don't think we're so brilliant to have come up with this one". These people know their stuff, they know their history, they know more languages than you think they do. They probably know more languages than you do - even pjmlp. Stop assuming they're ignorant of how generics are done in other languages. Seriously. Just stop it.)
As much as I want them to fix the big things like lack of generics, I hope they fix some of the little things that the compiler doesn't catch but could/should. One that comes to mind is how easy it is to accidentally write:
    for foo := range bar

Instead of:

    for _, foo := range bar
When you just want to iterate over the contents of a slice and don't care about the indices. Failing to unpack both the index and the value should be a compile error.
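The trap, concretely:

    package main

    import "fmt"

    func main() {
        bar := []string{"a", "b", "c"}
        for foo := range bar {
            fmt.Println(foo) // prints 0 1 2: the indices, not the elements
        }
        for _, foo := range bar {
            fmt.Println(foo) // prints a b c
        }
    }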
If `bar` is a slice then the short syntax isn't that useful (although it's shorter than the three-clause for loop equivalent).
But if `bar` is a map or a channel, then that short syntax is very handy.
For comparison's sake, you don't see many PHP users complaining about the difference between `foreach($bar as $foo)` and `foreach($bar as $_ => $foo)`.
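For comparison, the one-variable form is the natural one for maps and channels:

    package main

    import "fmt"

    func main() {
        m := map[string]int{"a": 1, "b": 2}
        for key := range m { // maps: iterates over keys
            fmt.Println(key)
        }

        ch := make(chan int, 2)
        ch <- 1
        ch <- 2
        close(ch)
        for v := range ch { // channels: iterates over received values
            fmt.Println(v)
        }
    }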
I'd also be fine with it if they switched the ordering of the tuple so that only unpacking one thing gave you the element instead of the index. My point is mainly that the behavior you get currently with a slice is not what you want 9 times out of 10.
There is so much boilerplate you have to write that productivity drops considerably, both because you have to re-re-re-re-implement things (or use code generation) and because you need to scan lots of redundant code before making sense of it.
The alternative is often to just use interface{}, which hurts performance and type safety.
This has so far been a theoretical concern for me only. If you want to write reusable data structures you obviously need generics. But none of the code I've written involves reusable data structures, so I can't say I really miss generics.
The productivity level is nowhere near Python's yet. You should try Go for a couple of weeks and see that there's still soooo much missing from Go compared to Python.
Generics have never stopped me from building in Go... But without them I often do my prototyping in python, javascript, or php.
Working with batch processing I'm often changing my maps to lists or hashes multiple times during discovery. Go makes me rewrite all my code each time I change the variable type.
It's weird, I see these complaints so often and I just.. don't.. get it. I'm not sure what I do differently, but I'm just so used to Go's method of non-generic that I don't run into frustration at all.
The only time I even notice it is if I have to write a method like `AcceptInt(), AcceptString(), AcceptBool()` and etc.
I enjoy generics in Rust, I'm just not sure what I'm doing differently in Go that causes me to not miss them.
It's possible your data (and types) are short lived.
When I'm doing text processing for example, I pass the same strings/dicts/hashes through dozens of functions for cleaning, sorting, organising, benchmarking, comparing, etc..
I'm actually coding a LITTLE project with 7 tables and wondering if Golang was the right choice... but I picked Golang because there is some async realtime component.
My pace of dev is very slow.
What aspect is causing it to be slow for you? Note that there are definitely some areas of Go I find terrible, and SQL is one of them. Check out the SQLx library, it's far less painful than the stdlib SQL is.
I'm so undecided on using GOPATH like that. Though, I may just do it.
Personally I hate Go's imports, and I think the GOPATH is a terrible, terrible idea. Yet, I quite like the vendor directory, and I expect the new `dep` tool to be the final touch.
Now my only problem is that I often forget which projects of mine are in my GOPATH, so I'm afraid to delete them -_-
vendor has big problems too. There are some proposals on how to start fixing them, after the dep tool lands (originally slated for 1.7 or 1.8 but still in Alpha?)
To support the uncommon edge case, src/foo/vendor/dep is treated as completely different from src/bar/vendor/dep.
Your src/foo code can't pass src/foo/vendor/dep types to src/bar code expecting src/bar/vendor/dep types. Even if the deps are identical checkouts. Code can become uncompilable by the act of vendoring a dependency.
You have two completely different sets of globals. Code can stop working by the act of vendoring a dependency.
The initialization runs twice. So well known packages like glog will panic if two packages has it vendored because they both try to update settings in the standard library.
GOPATH would be preferable, but none of the tools for managing packages at that level support pinning. So if you dare to develop more than one package at a time, you end up with a single monolithic repository representing your GOPATH and pin your dependencies in src/, or end up with symlink hacks and other unmaintainable tricks.
If you look two parents up you see my colleague posted virtualgo (https://github.com/GetStream/vg). That's the tool I built at my company to support GOPATH and version pinning (which is forwarded to dep). You can use it to solve the exact issue you're describing by using "vg uninstall packageName".
yeah, I don't fully understand this. I've had fewer problems with GOPATH than I have with any other language. You literally set GOPATH=${HOME} and it just works.
"The GOPATH issue" is the requirement that I keep my source code checkouts at any particular path, in any particular structure, to be able to use the toolchain.
Solving the GOPATH issue would be letting me "git clone" a Go project to a directory of my choosing and still use the tools.
Multiple unrelated Go projects in different repos located in different directories.
No problem at all with every other language, except with Go - not possible!
I want to open cmd.exe (Win) or bash (Linux), cd to the directory, and build. And not have to configure a GOPATH in the environment variables of the current user, or prepend it to the batch script, or set it in the IntelliJ Go plugin, etc. It's so inflexible, so awkward, and there is no reason for this crap at all. So please fix it for Go 2. Thx
Except you have to set it every time you open a shell. You can’t put it into your .bashrc either unless you only ever work on one project. This workflow sucks:
$ cd projects/foo
$ export GOPATH=$PWD
$ cd src/github.com/company/foo
$ go build
I use one GOPATH and work on dozens of projects. I would hate it if I had to change GOPATH often. In Go, if you find yourself fighting the system, chances are you are doing it wrong.
'go get github.com/user/project && cd $GOPATH/src/github.com/user/project'.
And to avoid dependency issues, I use one of the many vendoring tools. Currently 'govend'.
I'm not following you. I work in multiple repos under multiple orgs. I have my personal stuff, my work stuff, projects I've pulled down to contribute towards, and libraries that I want to test out tweaks upon.
I don't have go/src/ checked in. Is that what you mean? If you do that, you are doing it wrong. I have many, many go/src/$org/$project directories, each of them have their own source control (most of which is different git repos). If I am working on my stuff, cd $GOPATH/src/github.com/sethgrid/$project. That is the repo under source control. Do any development I want, push up any changes to that repo. Rinse and repeat. I can then cd $GOPATH/src/github.com/sethgrid/$project2 and do the same. When I want to pull down $project3, I just `go get github.com/sethgrid/$project3` or manually clone it and I can cd over to it and work on it separately.
You should try out https://github.com/GetStream/vg, it handles managing of multiple gopaths in such a way that cding to a directory switches you automatically.
Now we at least have vendor. But this alien-style fixed-path scheme is really annoying - no other language forces such GOPATH crap. Please remove this limitation in Go 2!
What exactly is wrong with GOPATH? I find it much easier to reason about than imports in say Ruby or Python, which are global names with no hierarchy and are resolved in much less obvious ways.
There's no obvious way to separate personal work, from work work, from exploratory work, from any other way you want to categorise your source directories. You just have to chuck them all in the same root and hope you can remember what each project is for.
You want to manage GOPATH as if it were a secret key!? Not ever storing it in repos, having weird dotfiles storing them locally, having to set up a keystore cluster like consul in production? Yuck.
The point is it's a per-project config. And yeah, there's no reason to store "/home/artur/go" in my git repo - that wouldn't work for my coworkers whose names are not Artur.
The thing I dislike is how such languages try to abstract away the filesystem representation of packages/modules/libraries/etc.
The conventions and resolution approaches that are needed to accomplish this often end up being opaque, less flexible, and harder to work with.
I much prefer the approach of C, C++, and even PHP to some extent, where the interaction with the filesystem isn't hidden.
When using C or C++, it's trivial to reference a globally-installed library, such as the standard libraries of such languages. It's also simple to reference third-party libraries, or even to include local copies of third-party libraries within one's own source tree. Relative or even absolute paths can easily be specified when referencing external code.
This also gives so much more flexibility with regards to how a project is laid out, and how it pulls in other code or libraries it depends on. I can follow an approach that works for me, for my project, for my team, for my version control system, for my development environment, for my deployment environment, and so on.
I want the language to conform to my needs. I don't want to have to modify my behavior and my environments to conform with what the language's developers deem to be the "right way" of doing things, especially when this isn't compatible with my needs.
Maybe this means there's slightly less consistency with how libraries and dependencies are handled across projects and libraries, but that's a cost that I'm willing to pay, and in practice it actually isn't that much of an issue when using languages like C and C++.
I'd much rather spend a few seconds specifying "-I" and "-L" and "-l" options when compiling or linking than trying to remember a bunch of conventions or how the high level package names end up mapping to the installed modules or libraries.
I'm not saying that the C approach is perfect, or that I'd want other languages to use a C preprocessor like approach of actually combining separate source files just prior to compilation or even execution.
But I would really like it if modern languages didn't try to hide the existence of files and directories so much, and didn't try to force conventions on me. I'd rather deal with the very small cost of explicitly telling the compiler where to find dependencies rather than relying on conventions or opaque resolution processes.
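Concretely, the kind of -I/-L/-l invocation described above, with made-up paths:

    cc -I ./third_party/foo/include -L ./third_party/foo/lib -o app main.c -lfoo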
As a Ruby and Go dev, I'm a bit sad to see backward compatibility go. Thinking I could write code with minimal dependencies that would just work as-is years later was really refreshing, compared to the high level of maintenance needed in a Ruby app.
But well, I trust the core team to make the best choices.
I like the Go concept: a very simple and minimalistic language, yet usable enough for many projects, even at the cost of some repetition. Generics are not a concern for me. But error handling is the thing I don't like at all. I think exceptions are the best construct for error handling: they are not invasive, and if you don't handle an error, it won't die silently; you have to be explicit about that. In my programs there's very little error handling, usually some generic handling at layer boundaries (an unhandled exception leads to a transaction rollback; an unhandled exception returns as HTTP 500, etc.) and very few cases where I want to handle it differently. And this produces a correct and reliable program with very little effort. Now with Go I must handle every error. If I'm lazy, I handle it with `if err != nil { return err }`, but this style doesn't preserve the stack trace and it might be hard to understand what's going on. If I want to wrap the original error, the standard library doesn't even have this pattern; I have to roll my own wrapper or use a third-party library for such a core concept.
What I'd like is some kind of automatic error propagation, so any unhandled error would be returned from the function wrapped in some special class with enough information to find out what happened.
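For reference, the usual third-party route at the time was github.com/pkg/errors and its errors.Wrap(err, "context"). A minimal hand-rolled sketch of the same idea (all names here are made up) attaches a message and the call site before returning:

    package main

    import (
        "errors"
        "fmt"
        "runtime"
    )

    // wrappedError is a hypothetical wrapper: a message, the call site,
    // and the underlying error.
    type wrappedError struct {
        msg  string
        site string
        err  error
    }

    func (w *wrappedError) Error() string {
        return fmt.Sprintf("%s (%s): %v", w.msg, w.site, w.err)
    }

    // Wrap annotates err with a message and the caller's file:line.
    func Wrap(err error, msg string) error {
        if err == nil {
            return nil
        }
        _, file, line, _ := runtime.Caller(1)
        return &wrappedError{msg, fmt.Sprintf("%s:%d", file, line), err}
    }

    func main() {
        err := Wrap(errors.New("connection refused"), "fetching user")
        fmt.Println(err) // fetching user (main.go:NN): connection refused
    }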
> For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve.
"We estimate that there are at least half a million Go developers worldwide, which means there are millions of Go source files and at least a billion of lines of Go code"
I must say that whenever there is a discussion about the merits of the Go programming language, it really feels hostile in the discussion thread. It seems that people are seriously angry that others even consider using the language. It is sort of painful reading through the responses which implicitly declare that anybody who enjoys programming with Go is clueless.
It also really makes me wonder if I am living in some sort of alternate reality. I am a professional programmer working at a large company and I am pretty sure that 95% of my colleagues (myself included, as difficult as it is for me to admit) have no idea what a reified generic is. I have run into some problems where being able to define custom generic containers would be nice, but I don't feel like that has seriously hindered my ability to deliver safe, functional, and maintainable software.
What I appreciate most about Go is that I am sure that I can look at 99% of the Go code written in the world and I can understand it immediately. When maintaining large code bases with many developers of differing skill levels, this advantage can't be overstated. That is the reason there are so many successful new programs popping up in Go with large open-source communities. It is because Go is accessible and friendly to people of varying skill levels, unlike most of the opinions expressed in this thread.
>What I appreciate most about Go is that I am sure that I can look at 99% of the Go code written in the world and I can understand it immediately.
No you really can't. You can look at any given line of code and tell me what it does, whereas it might take longer to parse any given line of Haskell, Scala, or OCaml.
However, expressivity in a large application pays a dividend tenfold, because the main challenge of reading code is not any given line, it is understanding the application architecture, the frameworks, the data flow and how it all works together.
> That is the reason there are so many successful new programs popping up in Go with large open-source communities.
Successful, or popular? Just look at Docker. Certainly popular, but check the bug tracker or talk to anyone with real world experience using it at scale and you'll be hearing a completely different story.
I think the negative attitude comes more from frustration than from anything else.
> I am a professional programmer working at a large company and I am pretty sure that 95% of my colleagues (myself included, as difficult as it is for me to admit) have no idea what a reified generic is.
I didn't know what it was called either until I saw them mentioned here, but now that I know about it, I get why it would be useful. Before that, I kind of assumed all languages with generics would also allow you to access the type information at runtime.
> I have run into some problems where being able to define custom generic containers would be nice, but I don't feel like that has seriously hindered my ability to deliver safe, functional, and maintainable software.
Interesting read, and I admit there is some of that. But my point isn't to say that those features aren't useful or powerful, but rather that with the constraint of working with a large group of programmers of varying skills, simplicity has more value than power (as long as we can deliver software that meets our requirements). It is similar to point 4 in the "Problems with the Blub Paradox" section.
One of the problems is who decides what is too simple. E.g. why are for loops, function calls and lambdas considered simple enough to be in Go, but generics aren't?
When I TA'ed CS introductory courses, students usually had a lot less trouble understanding generics than lambdas.
Not including feature X in programming language Y will also ensure that no-one who primarily uses Y will ever come to understand or appreciate X.
The blub paradox basically represents the opinion that it is always better for languages to include more powerful features.
I think that goes too far. Working primarily in Scala, I am seeing firsthand how quickly extra complexity in a language runs into diminishing returns, but I'm certainly of the opinion that Go leans too heavily toward handicapping the language in pursuit of simplicity.
The complexity from generics isn't from the conceptual standpoint, it's from the resulting code standpoint... for the same reason that the ternary ?: operator is "too complex": it's really easy to say that `foo := cond ? 1 : 0` is better than the if/else alternative, but `foo := (a > (b.Active() ? Compare(b, a) : useDefault() ? DEFAULT : panic())) ? "worked" : "failed"` is a mess waiting to happen.
Same with generics. It's easy to point to simple examples. It's hard to ensure that terrible metaprogramming monstrosities don't arise.
It's possible to write bad code with go as it is, but it's actually difficult to make such bad code inscrutable.
Maybe, but the solution to this is not to say: only the compiler implementers are smart enough not to create messy code with generics, so only a few built-in data types are allowed to be generic and the rest will have to copy-paste implementations or lose type information through `interface{}`.
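A sketch of that information loss (hypothetical Stack type): the interface{}-based container accepts the wrong element type at compile time and only fails at runtime.

    package main

    import "fmt"

    // Stack is the usual interface{}-based workaround for the lack of generics.
    type Stack struct{ items []interface{} }

    func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

    func (s *Stack) Pop() interface{} {
        v := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v
    }

    func main() {
        var s Stack
        s.Push(42)
        s.Push("oops")     // compiles fine; a generic stack of int would reject it
        n := s.Pop().(int) // panics at runtime: interface conversion
        fmt.Println(n)
    }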
FYI, reified generics undermine type safety and correctness in general, which is one of the reasons why so few languages support the concept (another is that erased generics allow for easier interoperability and for the creation of a much wider class of static type systems).
In my experience, most people who think they need reified generics don't really understand them and can actually do fine with much safer concepts expressed on an erased runtime.
It is because some people here who have no stake in Go programming seem extraordinarily angry, or maybe just concerned, that the Go authors didn't just override Go programmers and listen only to the enlightened PL experts here and at Reddit.
If it were a topic of gender or race instead of programming languages, I am sure this behavior would be called out as mansplaining/whitesplaining.
I do wish some term like 'PLsplaining' were used for this infuriating behavior of calling Go/PHP or whatever programmers clueless.
Having used C#, C/C++, Ruby, Perl, PHP, and Javascript, I find golang to be one of the most refreshing approaches to programming among them all. I think many people are finding themselves feeling upset because they are convinced that the simplicity of golang implies only idiots use it which then must mean that the language will be a failure (almost at an a priori level).
> It also really makes me wonder if I am living in some sort of alternate reality. I am a professional programmer working at a large company and I am pretty sure that 95% of my colleagues (myself included, as difficult as it is for me to admit)
What are you and your co-workers using aside from Go?
Mostly C and C++. Personally I am working in a C++ codebase right now and I struggle with all sorts of finicky cross-compilation and linking problems that just wouldn't exist in Go.
We also do some Python and Javascript for scripts and web work.
I don't think anyone has been arguing against its ease of use or its readability. Most people seem to be against its ergonomics and intentional lack of features.
I like Go, and its intentional lack of features, or very slow pace of adding them, is my favorite feature. Maybe I am a pseudo-programmer, as most people who dislike Go would probably say.
I'm not saying it's a good language or not. I haven't written in it enough to determine that. I know enough to say that anyone who says it's not a "Real Programming Language" is full of it.
This is a frustratingly common way to design mainstream PLs: "show me the use case". I've seen it firsthand for a number of big PL projects. People are trapped in their little bubble. My approach is to be wildly polyglot, actively searching for good ideas in other languages. Also, I try to write complex code outside the intended domain space and understand why it's harder than it should be.
For example, in Go it's difficult to implement state machines in a clean way (see the sketch below). Another: handling events and timeouts is easier with reactive programming. Distributed programming is easier with Erlang or Akka. Don't wait for problem reports in the Go community. Look at the problems in other PLs and proactively improve Go.
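One idiom that does exist for this, for what it's worth, is the "state function" pattern from the standard library's text/template lexer: each state is a function that returns the next state. A minimal sketch with made-up states:

    package main

    import "fmt"

    // machine holds whatever state the states share.
    type machine struct{ ticks int }

    // stateFn is a state that returns the next state; nil means stop.
    type stateFn func(*machine) stateFn

    func start(m *machine) stateFn {
        fmt.Println("start")
        return running
    }

    func running(m *machine) stateFn {
        m.ticks++
        fmt.Println("tick", m.ticks)
        if m.ticks == 3 {
            return done
        }
        return running
    }

    func done(m *machine) stateFn {
        fmt.Println("done")
        return nil
    }

    func main() {
        m := &machine{}
        for state := stateFn(start); state != nil; state = state(m) {
        }
    }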
This post really frustrates me, because the lengthy discussion about identifying problems and implementing solutions is pure BS. Go read the years' worth of tickets asking for monotonic time, and see how notable names on the core team responded. Pick any particular issue people commonly have with golang, and you'll likely find a ticket with the same pattern: overt dismissal, with a heavy moralizing tone that you should feel bad for even asking about the issue. It's infuriating that the same people making those comments are now taking credit for the solution, when they had to be dragged into even admitting the issue was legitimate.
His anecdote about Google/Amazon using leap smears to solve the problem is telling. I suspect that they were unable to see outside their own bubble to think about how others may be impacted.
> We did what we always do when there's a problem without a clear solution: we waited
The original report described the issue and the potential consequences very well, and the problem didn't change between the initial report and when Cloudflare hit it. It was only when a Serious Industrial User (to borrow a term from Xavier @ OCaml) got bit in a very public way that they actually began thinking about what a clear solution would look like.
> We did what we always do when there's a problem without a clear solution: we waited
And this is exactly why the Go designers don't understand language design, and how this ignorance shines through every single place in their language.
Language design is about compromises. There is never a perfect solution, only one that satisfies certain parameters while compromising others. You, the designer, are here to make the tough choice and steer the language in a direction that satisfies your users.
Besides, characterizing their position as "We waited" is very disingenuous. First of all, this is more stalling than waiting, and second, waiting implies they at least acknowledge the problems, which the Go team famously never does. Read how various issues are answered and then summarily closed with smug and condescending tones.
You are being deliberately negative here. Choosing to forego generics in favor of simplicity (and its impact along several axes) is a textbook example of a compromise. It is a tough choice that many people will be unhappy with, but there are also many Go programmers who are extremely satisfied with that direction.
As for acknowledging, well, they have always been very clear about their position. It makes no sense to spend a decade answering the same question over and over with a long and elaborate response which the person asking has already seen and dismissed. I can understand them becoming condescending after a decade of facing people who act with a deliberately obtuse and holier-than-thou attitude.
It's not like they have been lazy - every release of Go has had a large amount of improvements that matter. Working on generics would have meant sacrificing a (probably large) amount of them.
Given Google's orientation towards server-side web applications, it makes sense. On the other hand, the real-time OSs such as QNX have had monotonic clocks available for decades, and they use them for all delay and time interval measurements. (In real-time, you use the monotonic clock for almost everything that matters, and the day clock for logging and display only. You don't want the clock used for delays and time intervals to "smear"; it might have physical effects if it's being used to measure RPM or something and seconds got slightly longer.)
Go is great for server-side web applications. The libraries for that are all there and well debugged. Anything else, not so much.
Regarding the leap second bug, I suspect this is an example of perfect being the enemy of the good.
It appeared to me that the golang devs believed so strongly in the superiority of leap second smearing that waiting for everyone to adopt it was better than compromising their API or the implementation of time.Time.
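For the record, the fix that eventually shipped in Go 1.9 kept the API: time.Now() carries a monotonic clock reading alongside the wall clock, and Time.Sub/time.Since use it, so intervals can't go negative across a leap second. Roughly:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now() // carries a monotonic reading in Go 1.9+
        time.Sleep(50 * time.Millisecond)
        elapsed := time.Since(start) // computed from the monotonic clock,
        fmt.Println(elapsed)         // so never negative after a clock reset
    }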
This hit a nerve, so I'm going to leave it up, but after thinking about it I also don't like how purely negative it is. I can't edit it anymore, but I do want to follow up to say some things explicitly:
I want to be clear I'm thankful for the effort everyone has put into golang. Making something people care about enough to love or hate is hard. Stewarding a FOSS project can be a very negative thankless experience as well. While I'm being critical above, there are also lots of examples of golang folks, core or otherwise, making really constructive and productive comments and contributions. I don't mean to trivialize that or ignore it.
Note that at no point in the post whatsoever were any contributions to these specific issues from outside the core maintainers thanked or even acknowledged.
FWIW I think this is a fair criticism. We've had so much help from the Go community, not just for those two issues but also for essentially all the work we've done since the release in 2009. Go simply wouldn't exist without the community, and we're very grateful for it.
I couldn't fit any kinds of thank yous into the talk, except for the credit to the overall community in the first few minutes, because I had a lot to say and only 25 minutes. It's possible I should have written an extended blog post that was more than just the talk, but I didn't - the blog post is just the talk, as it says.
Would you mind providing examples of what you are talking about? My experience has been different - I find that the golang maintainers are thoughtful but also very very (very) busy people. So I find they tend to respond in a terse, and sometimes what might seem like an unfriendly way, but I haven't seen any of what you mention here.
Same, and to add: Go 2 has been planned for a while (since the beginning?) and I'm pretty sure the core developers tend to defer major new feature ideas (e.g. generics) to Go 2.
If they implemented everything everyone asked for they'd have another screw-up like C++ or Java.
I know that both Java and C++ have a lot more issues than Golang ever will. I've used all three languages. Java has mile-long class hierarchies, massive try-catch blocks, indentation as thick as my neck, and runs in a JVM. It's so bad there are already other implementations that people would much rather use. I also really don't like the file/project naming conventions.
C++ is a whole other animal. The whole system of a Go program is typically very penetrable: you can trace the code down to the bottom of the stdlib, or even to the compiler builtins, in seconds (thanks to the excellent Guru and vim-go). In my experience with C++, I doubt many C++ programmers have ever looked at the implementation of the STL or iostream, which is hard to do ergonomically, although some people call that opaqueness encapsulation and see it as good practice (which I don't agree with). There are lots of cross compilation and static linking problems that don't arise with Go. And one C++ code base will look completely different from another, because no one in the C++ world follows conventions the way people in the golang world do.
Compared to them Go is a very well thought out and elegant language.
> I know that both Java and C++ have a lot more issues than Golang ever will.
No you don't. But even if I granted that Golang has a bright future (I don't think it does) by any objective measure C++ and Java are more successful than Golang. LOC, number of devs, performance, install base, etc. Pick a measure other than current hype (not even peak hype) and either Java or C++ trounces Golang.
> I've used all three languages.
I have too. I program golang full time and have for 3 years. I'd switch tomorrow to either of the other languages if I could wave a magic wand (and I'm a fair hater of both).
> and runs in a JVM
I desperately miss the JVM (any of the ones I've used). The amount of sophistication and polish in comparison to the Golang runtime is embarrassing. Debugger support, operational sophistication, IDEs, tooling - basically everything is better on any of the JVMs I've used in comparison to the golang runtime.
>Java has mile long class hierarchies, massive try-catch blocks, indentation as thick as my neck
And go has ridiculous copy/paste libraries, horrendous concurrency edge cases, terrible error handling, worse third party libraries and no story around packages.
> There are lots of cross compilation and static linking problems that don't arise with Go.
Because dependency management is not possible in golang. It is literally the worst story in any language I've used in 20 years. You have two choices in golang: half-baked vendoring, or a monorepo where you have all of your code in one place.
> Compared to them Go is a very well thought out and elegant language
No, compared to them Go is a young language. There aren't any big projects in Golang yet. We shall see if there ever will be. My theory is that either none will ever happen as some other language will be a better choice, or golang will adapt and all the things people claim are benefits (simplicity, default tooling) will go away, crushed under the reality of complex projects being complex.
>Compared to them Go is a very well thought out and elegant language.
But interestingly, not one I would choose with which to embark on a project that I expected to require more than 2000 lines of code (as a ballpark scale measurement).
Go seems to be an excellent glue language. It's very suitable for making a tool that does /this/ and only /this/, putting it in a docker container, and running 500 copies of it at once, linking other parts of my stack together.
I would never use it to write an RDBMS, because (as someone who has admittedly been following the situation only loosely) the language maintainers don't seem interested in solving other people's problems. They seem to be making a language for themselves, and I can respect and understand that, but it certainly gives me pause to think about using it for anything big. If I run into a problem that Go can't solve, will it be possible to persuade the language developers to give a shit? Situation unclear.
> overt dismissal, with a heavy moralizing tone that you should feel bad for even asking about the issue
The attitude of the Go community cannot be separated from the patronizing tone of the Go maintainers. In fact it stems directly from the people @ Google working on Go. All the bullshit "you don't need that with Go"™ comes directly from Pike, Cox and co. It's fine to be opinionated, but just admit these are opinions instead of engaging in the bullshit they engaged in for years, dismissing this or that problem as irrelevant.
Pike said "Go design" is done. Except that a language which design "is done" is a dead one.
I'm still sitting here shocked that a language where "err" (and the keywords that check around it) are used an order of magnitude more frequently than all other syntax in that language, has achieved this much popularity: https://anvaka.github.io/common-words/#?lang=go
That's what kept me from looking at the language for the last five years. I finally broke down and wrote my first Go program. Yeah, the errors (and lack of generics) are annoying, but it gets enough else right – tooling, runtime speed, compilation speed, type inference, parametric polymorphism (on interface types) – that I enjoyed it anyway. Every language has some annoyances; it's nice when there are only a few.
Not only that, but you can just ignore errors during prototyping, and with a little editor magic you can go back and easily generate at least 85% of the `if err != nil { ... }` statements.
As much as people complain about it, explicitly checking errors like this (and checking them all) is almost essential for production quality enterprise code and any code running on mission critical systems (and to most project managers, all systems are mission critical).
And fwiw I still prefer go's style of error checking to languages that implement massive try/catch blocks.
I think the "embrace failure" supervisor model in BEAM is a superior design for mission-critical systems (such as cell networks, where it originated) while resulting in literally a ton less boilerplate. You just code the "happy path" and done. Any errors are logged and the process instantly restarted by the supervisor, you can follow up on them if they present a problem.
If you were to code just this "happy path" in Go, it would actually be the unhappy path, because you'd have silent errors resulting in indeterminate state (read: horrible bugs to troubleshoot). If you believe programming is mostly about managing all possible state (as I do), then Go's strategy is way too explicit. In Erlang/Elixir/BEAM's case, as soon as state goes off the rails (read: encounters something the programmer didn't anticipate), it's logged and restarted by its supervisor process almost instantly. (and there are of course strategies for repeated restarts)
People keep saying this and I struggle to understand how this represents good design. In any language I can always just eat the error and keep going, or eat the error and restart.
That doesn't fix the error, and it doesn't imply the program will work correctly.
> In any language I can always just eat the error and keep going, or eat the error and restart.
Defer to pmarrek on what they meant, but to me it's an issue of practical programming.
In a choice between "I will tell you what to do about errors" vs. "I will assume you only crash and restart on any error", I've found the latter to be far more efficient.
The former generally leads to a rat's nest of never ending error specialization as unexpected or rare stuff bubbles up in UAT or down the road in prod.
Which isn't to say there's a right answer. There's always going to be particular situations where of course you should use one or the other.
But on the whole, as a philosophical default, fail-and-recycle-on-all-errors is a helluva lot easier to spec, code, test, and maintain for me.
I think you need to try it out to see what the big deal is. It's pretty liberating.
The thing is, there will always be the "unknown unknown" bugs, the ones you didn't even think to anticipate, and Erlang/Elixir/BEAM will always win out in a behavior contest on those vs. in Go, because it is built to anticipate any possible error, not just the possibilities you're aware of.
so the argument is that there are (broadly) two classes of errors. ones that happen all the time and ones that happen hardly ever
the erlang strategy is to make it possible to continue from a known good state when the latter happens, where chances are it won't happen again. the errors that happen all the time you will encounter early/often enough that you can work out how to handle/fix them
the result of this strategy is you pretty much never write code that handles the unexpected. it turns out that's a lot of code to not have to write
Those supervisors don't always handle the errors the way you want and can't always be implemented without diminishing overall performance. It's a trade-off.
It's convenient to write this sort of optimistic code that Erlang encourages, but the error messages are often not very helpful. You'll often end up with a pattern matching failure and stacktrace that isn't very clear compared to a hand-written error message that you would get in Go.
Have you used erlang? Pattern matching is a big part of it, and pattern match failures always result in errors. So, if you have some function foo() that always returns ok or {error, Reason}, then in the "follow the happy path" style, you'd just have a line reading "ok = foo(),", and then if foo actually returns something else, you get a "crash". There's no assuming anything there, it's just the way the language works.
Yes I have. But I see the erlang blindness still runs strong. Just because you have pattern matching doesn't mean you don't have bugs caused by humans. Code can be syntactically correct and also logically not match the problem domain.
For example, what if you screw up and forget to encode that it SHOULDN'T return OK?
You know that many other frameworks also have these safeguards, right? Java server frameworks have been doing it forever. You can code dirty if you want and let the framework catch the fallout, or you can handle errors explicitly. Hell, you can do the same thing in Go: you can catch panics. It's not pretty, but you can definitely do it.
I forgot one can only point out strengths of erlang not problems with humans coding in general. all hail erlang.
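To make the "you can catch panics" point above concrete, here's a minimal supervisor-ish sketch in Go (the worker is made up; a real one would also need to restart with fresh state):

    package main

    import (
        "fmt"
        "time"
    )

    // worker stands in for any code that may hit an unanticipated error.
    func worker() {
        panic("unexpected state")
    }

    // supervise restarts f whenever it panics, crudely mimicking a
    // BEAM-style supervisor. Runs forever; Ctrl-C to stop.
    func supervise(f func()) {
        for {
            func() {
                defer func() {
                    if r := recover(); r != nil {
                        fmt.Println("worker crashed:", r, "- restarting")
                    }
                }()
                f()
            }()
            time.Sleep(time.Second) // back off before restarting
        }
    }

    func main() {
        supervise(worker)
    }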
Logical errors are always an issue, regardless of the language.
As for Erlang's problems, it does have a fair number of warts and special cases, but the biggest of them all is that it's still a niche language. We actually had to dump Erlang for new projects in favor of Go. That may sound ridiculous, but filling positions involving Erlang was next to impossible (even with candidates without any prior knowledge of the language).
Why wouldn't it, if you're pattern-matching correctly? And in any event, it seems far easier to get into that situation in Go (which ignores errors unless you explicitly check for them) than in Erlang (which crashes on every error because restarting is cheap)
My point is... "Pattern matching correctly"... assumes no mistakes on the coder's part. Go back and re-read my comment. Or don't. Downvote me because I dare question your skill in the context of the mighty erlang.
I would never downvote someone simply for disagreeing, and certainly not unless I was 100% sure they were wrong or it was way offtopic. (And you would be surprised how often I admit I was wrong or cede a point!)
You're basically saying "well, it's still subject to logic errors." Well, that's a nonargument, because you can say that about 100% of languages in existence. That is not a criticism you can levy just against BEAM langs. And we were never arguing that BEAM magically takes care of your logic bugs. Just that it seems to handle the "set of all bugs that are not logic bugs" better.
I also apologize for erlang/elixir fans being rabid and insufferable. ;) A lot of us have dealt with many other langs (I spent years on Ruby and other OO langs) and we're enjoying the unexpected (to us) benefits of this new/old world, that's all. Joe Armstrong is basically Einstein AFAIC...
You can wrap all your code in a catch-all handler for unchecked exceptions and never again have to worry about errors. It's bad practice, obviously, but there are many situations in which you don't care what error occurred, only that an error occurred.
That is not what software engineers should spend time on, definitely.
Not automating ubiquitous trivial propagations with at least an explicit "rie <expr>" (rie for "return if error") is a complete engineering fail under any philosophy.
Oh, I have an idea: if the ", err" part is missing from the lvalue, insert that mantra automagically under #pragma ARIE=on. This workaround would give the designers some stats to embrace.
I consider that explicitly and manually handling every possible error, and making a conscious decision to bubble it up (return) or to interpret it is a very good use of my time as a programmer.
> Is not what software engineers should spend time on, definitely.
But hey! Your editor can easily boilerplate all that... boilerplate!
I think Go will continue to grow until we finally have metrics that say that coding something in Go might get you fast-running code but will be slower to code and hell to maintain.
If it weren't for Goroutines I would not find go very useful. I expect C++ will implement coroutines and/or fibers someday that will be very well designed and more carefully/broadly thought out.
No, it's way worse, because you have to write the error handling multiple times even if it is all the same (and it usually is).
For example, in Java, I can make as many method calls I want inside of a try block, and catch an exception from any one of them in the catch block. In Go, I would need the equivalent of one catch block per method call, in the form of "if err != nil { return err }". Why repeat yourself?
If you want to swallow an error in Go, it's easy to do, even accidentally - just omit the "if err != nil" check. Both error-handling models enable this kind of behavior. If you are going to do it, you might as well not repeat yourself while you are doing it.
You can, but you should not be forced to do so. I could easily do that in Java if I wanted to, but I also have the flexibility to handle them all in the same way, or pass them back to the caller (which the right thing to do 99% of the time), with a very small amount of code.
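A sketch of the repetition in question (all names made up): three fallible operations, three separate checks, where a single try/catch would cover them all in Java.

    package main

    import (
        "fmt"
        "os"
    )

    // writeReport shows how every fallible call needs its own check.
    func writeReport(path string, rows []string) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close()
        if _, err := fmt.Fprintln(f, "REPORT"); err != nil {
            return err
        }
        for _, row := range rows {
            if _, err := fmt.Fprintln(f, row); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := writeReport("report.txt", []string{"a", "b"}); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }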
To be honest, moaning about Go idioms is like moaning that Java is object oriented or that Lisp has too many parentheses. If you don't like the idioms of a particular language, then don't use that language. There are plenty of others to choose from. Moaning that language X should be more like language Y is just daft, as it misses the point of why language X decided to do something different in the first place.
I'm not saying Go's error handling is better than Java's. But I've written some relatively large and complicated projects in Go (like the Linux $SHELL and REPL scripting language I'm currently coding this very moment) and error handling really is the least of the problems I've been having. Sure, Go's idioms do get in my way from time to time. But on balance Go's idioms save me more time than it wastes. Which is why I keep coming back to Go despite having used more than a dozen other languages in anger. But that's my workflow and my personal preference. Others will find the reverse to be true and I wouldn't be complaining that their language should be more like Go.
Frankly I think people waste too much time comparing languages. Just try something and if it doesn't work, move on. Don't assume your personal preference is a benchmark for how all languages should be designed as there will be a million other developers whose personal preference will exactly oppose you.
I remember in Python some class / module would have different fields in the error object, so it was really difficult to do something meaningful with those errors.
There is also the switch you need to write when catching an error - e.g. a TCP connection error, then a DNS resolution error, and so on - you have to know every kind of sub-error each time.
Frankly, I've never seen that (try / catch / ignore exception) in any commercial codebase I've worked on, running into tens of millions of lines.
The worst I've seen is people logging the error message but not the exception itself (so losing the stack trace), or causing another exception to be thrown (and losing the stack trace or cause of the original).
I don't understand this comment - Python does what I'd argue to be the "right" thing, which is that operations which can fail raise exceptions that unwind the stack. If you don't care to handle failure in a particularly granular or graceful way (e.g., you're a command-line process, or you're a server process with lots of independent requests), you can have one large catch statement around all of your work, or even just use the implicit one around the entire program provided by the runtime.
Meanwhile, well-written C code does this the "wrong" way; every external call has to be done like
if (read(fd, buf, len) < 0) {
    /* hopefully remember to free some things */
    free(buf);
    return NULL;
}
Why do you think explicit error handling is a Pythonic practice?
C99 took a while to be widely adopted, mainly because some compilers were very slow on the uptake (I'm looking at you, Visual C++), but now I would never consider using anything less than that for a new project. It's almost 20 years old!
Many small quality-of-life improvements: stdint.h, stdbool.h, inline, restrict, allowing variable declarations within the code, snprintf, variadic macros...
Most of those were accessible through compiler extensions but having them in the standard means that you know it'll work everywhere on any compliant implementation.
Linux definitely isn't conservative when it comes to the language, not only does it use modern features of the language but it even relies on GCC extensions and even sometimes on the compiler's optimizer to produce the correct code: https://unix.stackexchange.com/questions/153788/linux-cannot...
The discussion is not about identifying problems. The issue is communicating the problem and explaining the impact. The Go team was wrong about monotonic time but the correct response is not to assign blame. Concrete examples and conversations are a better way to communicate a problem. This is what they are trying to communicate now.
And if you _do_ want to assign blame, it's fine to put it on us. I did as much in the talk:
"Just as we at Google did not at first understand the significance of handling leap second time resets correctly, we did not effectively convey to the broader Go community the significance of handling gradual code migration and repair during large-scale changes."
Regarding the lack-of-generics problem: is there a way to get around it? There are always plenty of tools for that. If the IDE can patch the syntax and support some kind of pragma to generate the template code, then the problem is almost solved - though I'm not sure it would cover all the cases Java does.
This was announced at GopherCon today. FYI, if folks are interested in following along other conference proceedings, there is no livestream, but there is an official liveblog: https://sourcegraph.com/gophercon
About generics: I've never taken a deep look at it, but I've always wondered if most of the problem couldn't be solved by having base types (int, string, date, float, ...) implement fundamental interfaces (sortable, hashable, etc.). I suppose that if the solution were that simple, people would've already thought about it.
In particular, I think it could help with method dispatch, but probably not with memory allocation (although Go already uses interfaces pretty extensively).
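The closest thing in today's Go, for what it's worth, is the three-method sort.Interface: the standard library abstracts over "sortable" without generics, at the cost of per-type boilerplate and without giving you typed containers back. A small example:

    package main

    import (
        "fmt"
        "sort"
    )

    // byLen is the per-type boilerplate that sort.Interface requires.
    type byLen []string

    func (s byLen) Len() int           { return len(s) }
    func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }
    func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

    func main() {
        words := byLen{"kiwi", "fig", "banana"}
        sort.Sort(words)
        fmt.Println(words) // [fig kiwi banana]
    }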
Most of the discussion here seems to be around generics, and it sounds like they still don't see the benefit of generics.
I like Go, but the maintainers have a maddeningly stubborn attitude towards generics and package managers and won't ease up even with many voices asking for these features.
> We did what we always do when there's a problem without a clear solution: we waited. Waiting gives us more time to add experience and understanding of the problem and also more time to find a good solution. In this case, waiting added to our understanding of the significance of the problem, in the form of a thankfully minor outage at Cloudflare. Their Go code timed DNS requests during the end-of-2016 leap second as taking around negative 990 milliseconds, which caused simultaneous panics across their servers, breaking 0.2% of DNS queries at peak.