Go 2 Draft Designs (googlesource.com)
909 points by conroy 55 days ago | 473 comments



Looks good. The control flow of "handle" looks a bit confusing, but overall I definitely like the direction this is going.

The decision to go with concepts is interesting. It's more moving parts than I would have expected from a language like Go, which places a high value on minimalism. I would have expected concepts/typeclasses would be entirely left out (at least at first), with common functions like hashing, equality, etc. just defined as universals (i.e. something like "func Equal<T>(a T, b T) bool"), like OCaml does. This design would give programmers the ability to write common generic collections (trees, hash tables) and array functions like map and reduce, with minimal added complexity. I'm not saying that the decision to introduce typeclasses is bad—concepts/typeclasses may be inevitable anyway—it's just a bit surprising.
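For a concrete picture of the universals-only alternative: under the draft's (type T) parameter syntax, a function that performs no operations on T needs no contract at all. A minimal sketch (my own example assuming the draft syntax, not from the proposal):

    // Reverse requires nothing of T beyond moving values around,
    // so no contract/typeclass is involved.
    func Reverse(type T)(s []T) {
        for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
            s[i], s[j] = s[j], s[i]
        }
    }

Anything that needs ==, <, or hashing is exactly where the universals-only approach would have to fall back on builtin functions instead of contracts.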


I don't like the possibility of nested handle() blocks.

This could make grasping the control flow in a function very cumbersome.

Everything else looks like a very good change to me and would draw me back to Go.


I've thought about that before, as I participated in one of the bugs where we chewed on these various matters, and I came around to the conclusion that in general, understanding what is in your current scope is just the cost of doing business. Being scope-based is the minimal cognitive load you can hope for.

I also strongly suspect that in the general case, you're not going to see more than one handler at the top of the function. It will be a rare function that has a dozen of them in there, and either A: it needs to be refactored (and while I'm sorta sympathetic to the idea that we shouldn't hand programmers too much rope to hang themselves with, that argument has the problem that you end up removing pretty much everything from the language), or B: it's the core inner complicated loop of a program and it just needs that much complexity, which still doesn't mean it's going to show up in every function.

It's not that dissimilar from one function getting nested four layers deep in try/catch statements; yes, it would be a bit confusing to disentangle what that would actually do (more so, since the catch statements are conditional but the handlers are not), but the answer is simply not to do that unless you really need to, and thwack any coworker that tries it on the wrist during code review. It is impossible to make a Turing-complete language in which I can't express something incomprehensible.


I actually like the explicit error handling idiom a lot.

(especially in Rust, now that it has been made convenient with the ? operator and the failure crate, which provides backtraces and wrapping).

I would prefer an extension of the `check` syntax instead of the standalone handle blocks.

For example:

    check someFn()
    |> if err := someFn(); err != nil { return err }

    check someFn() handle {
      panic(err)
    }
    |> if err := someFn(); err != nil { panic(err) }

So handle blocks are only for an individual expression.

This way the control flow is still linear and explicit (ignoring defer...) but the syntax is much more convenient.


Yeah, this is the obvious approach. Just make "check EXPR handle BLOCK" syntactic sugar for the "if err != nil" idiom. It's totally clear to the reader what's going on.


It's obvious because it's essentially just try/catch (as I'm sure you know) but misses the whole point of why they propose anything different.

The point of 'handle' is that it applies to ALL FUTURE calls to check in the function body. If we're going to consider alternatives we need to at least address the author's perspective (even if just to rebut it).

I guess the proposal's central idea is that once you see an error, error handling is going to be focused on rolling back various stages of partial progress. It's the complement to how 'defer' incrementally accumulates guaranteed execution in the face of either a return or unwind.
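Roughly, a sketch of that accumulation (my own example in the draft's syntax; fetch, store, and cleanup are made-up names):

    func process(user string) error {
        handle err {
            return fmt.Errorf("process %s: %v", user, err)
        }
        data := check fetch(user) // a failure here runs only the handler above

        handle err {
            cleanup(data) // no return: the chain continues to the handler above
        }
        check store(data) // a failure here rolls back, then wraps and returns
        return nil
    }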

That said, how common this issue is and whether it's worth optimizing for is up for debate. It would be nice to see a lexicon of well-handled errors; for 90% of what I do, log-and-unwind is sufficient.

What do I know -- I actually like Java style checked/unchecked exceptions and think they'd work well for Go as the parent said. For what I'm up to it's often sufficient to just not leak partial progress via side effects. All the partial progress just evaporates into the GC. 'finally' or 'defer' takes care of most of the .close() or .unlock() calls.

The change I'd make to Java-style exceptions if I had a time machine is to make all exception types unchecked. Instead of looking at exception types at all, enforce checks only if a function has 'throws' declared.

Calls to 'void close() throws IOException' would still need to be checked, but a function containing 'throw new IOException()' isn't forced to declare throws or catch it. This lets APIs be explicit and force due diligence, but lets user code intelligently allow unwinding until it's caught at the appropriate level. Some sugar for code to acknowledge the checked error and unwind could be nice, I suppose.

I think fattening out Go's errors into full exceptions with stack traces is more pressing, personally.


As someone that also likes them, I would point out that the Java design team actually took their inspiration from CLU, Modula-3 and C++ regarding checked exceptions.


I'm only familiar with Java and Python; do CLU or Modula-3 do much differently? I heard C++ engineers generally hate exceptions, partially because they're unchecked.


The point being that CLU, Modula-3 and C++ had checked exceptions before Java was even a thing.

CLU never went beyond university projects, and Modula-3 research died after the labs went through several acquisitions, DEC => Compaq => HP.

As for C++, checked exceptions never gained much support, because their semantics are different from what Java adopted. Throwing an exception that isn't part of the list terminates the application and there were some other issues with them, so now as of C++17 they have been dropped from the standard after being deprecated in the previous ones.

In general there are two big communities in C++, the ones that turn everything on (RTTI, exceptions, STL, Boost...) and those that would rather use C but get to use a C++ compiler instead.

So no we don't hate exceptions, it just depends on which side of the fence one is.


...you have proposed try/finally/except I guess (just put some closures in there where needed).

I think just using try/finally/except would be a step up over what has been proposed.


good idea. please add it to the feedback they're asking for :)


Yeah, the control flow of "handle" is the most questionable thing, from my standpoint. (Incidentally, I find the function-scoped nature of "defer" unfortunate too.)


handle/check looks like try/catch in reverse order with check/try limited to statements instead of code blocks. Not sure if that's an improvement or just different for the sake of being Go.


I have noticed that my Python code increasingly works like this via the optional else clause in its try/except mechanism.


I don't think the Go 2 generics proposal amounts to type classes.

From what I inferred from the proposal, contracts are structurally typed rather than nominally such that you don't have a location at which you explicitly denote that you are implementing something. Rather, the fact that a type coincidentally has the right methods and such provided becomes the implementation / type class instance.
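A sketch of that structural flavor, assuming the draft's contract syntax (stringer follows the draft's own example; Join is my own):

    // The contract body is ordinary code that must be valid for T.
    contract stringer(x T) {
        var s string = x.String()
    }

    // Any type with a String() string method satisfies stringer
    // implicitly; nothing ever declares "I implement stringer".
    func Join(type T stringer)(xs []T, sep string) string {
        parts := make([]string, 0, len(xs))
        for _, x := range xs {
            parts = append(parts, x.String())
        }
        return strings.Join(parts, sep) // uses the strings package
    }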

Also, I didn't check this in a detailed way, but do contracts as proposed have the coherence property / global uniqueness of instances? I would consider that a requirement for calling a scheme for ad-hoc polymorphism on types "type classes". In other words, Idris does not have type classes but Haskell and Rust do.


I'm also surprised, manual dictionary passing ("scrap your typeclasses"-style) seems way simpler.


I dislike concepts because they seem to be a half measure, one that is already causing pains in the design.

For example, the issues with implied constraints, infinite types, and parametric methods could be solved by using something more properly founded in type theory. Concepts are effectively refinement types[1], and a properly founded implementation from type theory would mitigate these issues.

[1] https://en.wikipedia.org/wiki/Refinement_type


What "properly founded implementation" would you suggest?

Concepts are not "effectively" refinement types.


Combative a bit are we?

I mean something properly founded in type theory, like say _refinement types_ as implemented in Liquid Haskell: https://www.microsoft.com/en-us/research/wp-content/uploads/... . Constraints of course are _not_ refinement types, but very close to a _subset_ of refinement types. Specifically they have deep similarity to Refined Type Classes from the above paper.

Having used Idris, LiquidHaskell, and F* to implement provable algorithms for some distributed systems, I _don't_ think Go should go down that route. The techniques are simply not simple or clean enough for more general purpose languages. But having a type system which has a subset of those features, and which stands on firmer mathematical footing, lets you reason about which of these two programs is okay:

    // OK
    type List(type T) struct {
        elem T
        next List(T)
    }

    // NOT OK - implies an infinite sequence of types as you follow .next pointers.
    type Infinite(type T) struct {
        next Infinite(Infinite(T))
    }

Which the go authors see as problematic:

> "It is unclear what the algorithm is for deciding which programs to accept and which to reject."

Idris certainly has no problem choosing which ones should and should not be used.


I don't think there is any lack of clarity about your List example; that is clearly forbidden. The example that is less clear is the one that generates a million types and then stops.

A similar issue arises in C++; the C++ standard says that there is an implementation defined limit on the total depth of recursive instantiations. Perhaps Go should do something similar.


C++'s method is certainly a practical way to limit expansion on inductively defined data types. I think a better way would be to construct a type system _opposite_ of most constructed today (which focus on the ability to express problems) and instead intentionally limit the types we are able to construct to those with "easy to use" and computationally efficient properties.

Consider this sample. We want to be able to define inductive types such as `List(T)`, but not `InfiniteList(T)` or `BigArray(T)`. While `BigArray(T)` is an interesting construct (it's effectively a dependent type)* it jumps past the "can I keep it in my head" smell test for me. As soon as I have to reason deeply about what a type constructor does it just doesn't feel like Go to me.

So we want to be able to construct types which are inductive but can only calculate a single type. List(T) calculates one type, BigArray calculates _n_ types, and InfiniteList calculates an infinite number of types.

* In Idris one would write something like:

  data BigArray : (n : Nat) -> (a : Type) -> Type where
      Nil  : BigArray Z a
      Cons : a -> BigArray n a -> BigArray (S n) a


Although I don't think they called out refinement types as a possibility (I only skimmed near the end), I wouldn't be surprised if they quickly determined that tackling undecidability is not a goal of the Go type system. Especially since every attempt at refinement types (or even nontrivial constraints) ends up with a compiler that gets really slow, and fast compilation is one of the tenets of the language.

Go contracts as-written, or "accept a T that can do a thing and have a default implementation", is orthogonal to refinement types.

N.B. I'd really love to see refinement types break past the acceptable threshold of compilation time, since they're really neat and are easy to explain to people in terms of their usefulness.


My thoughts here are based on this section of the generics draft:

> We would like to understand better if it is feasible to allow any valid function body as a contract body.

A generic function to compute a type is _very much_ in the realm of both refinement and dependent types. The BigArray example (as mentioned in a sister thread) is just a dependent type.

> The hard part is defining precisely which generic function bodies are allowed by a given contract body. [...] We are most uncertain about exactly what to allow in contract bodies, to make them as easy to read and write for users while still being sure the compiler can enforce them as limits on the implementation. That is, we are unsure about the exact algorithm to deduce the properties required for type-checking a generic function from a corresponding contract.

My proposal/goal here is to define a type system, or set of constructors which has nice, formally provable limits on what can be constructed. Those constructs should be both easy to grasp _and_ computationally simple. As you say _general_ refinement types are not suited for a compiler focused on speed; however, a subset can very well be.


Yes, I would already be more than happy with what CLU allowed for.


In some dependently typed languages they can be easily implemented via refinement types, where the “refinements” are the implementations of the typeclass requirements. Idris is a good example of this approach.

It has the advantage of making typeclasses first-class, and of enabling a lot of additional functionality without additional fundamental constructs. It is however very antithetical to what someone means when they say Go is minimalistic.


Overall it's really great that Go is addressing its current biggest issues. I do think the argument for the check keyword instead of a ? operator like in Rust is quite weak. Mainly because a big advantage of the ? operator (apart from the brevity) is that it can be used in method calling chains. With the check keyword this is not possible. AFAICT the check keyword could be replaced by the ? operator in the current proposal to get this advantage, even while keeping the handle keyword.

Furthermore the following statement about rust error handling is simply not true:

> But Rust has no equivalent of handle: the convenience of the ? operator comes with the likely omission of proper handling.

That's because the ? operator does a bit more than the code snippet in the draft design shows:

    if result.err != nil {
        return result.err
    }
    use(result.value)

Instead it does the following:

    if result.err != nil {
        return ResultErrorType.from(result.err)
    }
    use(result.value)

This means that a very common way to handle errors in Rust is to define your own error type that consumes other errors and adds context to them.


> I do think the argument for the check keyword instead of a ? operator like in Rust is quite weak. Mainly because a big advantage of the ? operator (apart from the brevity) is that it can be used in method calling chains.

In Dart (as in a couple of other languages) we have an `await` keyword for asynchrony. It has the same problem you describe and it really is very painful. Code like this is not that uncommon:

    await (await (await foo).bar).baz;
It sucks. I really wish the language team had gone with something postfix. For error-handling, I wouldn't be surprised if chaining like this was even more common.


Maybe I'm crazy but I'd much prefer that as:

   const temp = await foo;
   const bar = await temp.bar;
   const baz = await bar.baz;
But then again it looks like in this example we are returning promises from getters on objects which is something I would avoid.


Yeah, the guidance is usually to hoist the subexpressions out to variables like you do here. But I encounter it often enough that it feels like we are trying to paper over a poor language syntax.

> we are returning promises from getters on objects which is something I would avoid.

Getters are very common in Dart and it's idiomatic to use them even for properties that are asynchronous. (It's not like making it a method with an extra `()` really solves anything.) It's not as common to have nested chains of asynchronous properties, but they do happen, especially in tests.


Would it be better to use statements more? After all, this is a sequence of steps and it's doing a context switch between each step. That seems better written vertically, one per line.


Deeply chained method calls are a code smell on their own. But would it really be that much better if you wrote it as something like:

    foo..bar..baz;


I'm not sure about Dart, but in my experience it's rare that you await a function which returns an object with another awaitable in it.

Never mind something that nests these 2 levels deep.


fetch() + response.text() is usually two. Add a user-defined third and you're there.


You're right, I oversimplified that. But having a "ResultErrorType" is not really context, not by itself. The interesting context would be additional fields recorded in that type.


That being the case, you should change the doc, because it's an important distinction. Speaking personally, I have found in my own Rust that the presence of the propagation operator does not, in fact, come with the "likely omission of proper handling"; to the contrary, it has allowed me to (properly) propagate errors that I can't meaningfully add context to, and to (properly) handle those errors that I can handle -- all with very readable code that doesn't involve error-prone boilerplate.


Yeah, the underhanded comment "the convenience of the ? operator comes with the likely omission of proper handling" is not only unnecessary but completely wrong in my experience. It strikes me as if the author hasn't actually ever used or investigated the language in any kind of depth, and took a wild guess that "making propagating errors easy means people don't actually deal with errors" or something.

In practice, writing `?` is no more or less automatic than `if foo, err := someFunc(bar); err != nil { return nil, err }`, but the difference is that it's immediately obvious when special error-handling logic has been added, by the simple fact of its existence.


This is something I don't enjoy about Rust's error handling. The context is added on a package level (or at least, the level at which the Error type is defined, which seems to be usually package level), and often quite far away from where the error was emitted. For all its ills, `errors.Wrap(err, "...")` puts the adding of context incredibly close to the place where the error is relevant.
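For reference, a minimal sketch of that proximity with github.com/pkg/errors (loadConfig and the message are my own):

    import (
        "os"

        "github.com/pkg/errors"
    )

    func loadConfig(path string) (*os.File, error) {
        f, err := os.Open(path)
        if err != nil {
            // The context is attached right next to the failing call.
            return nil, errors.Wrap(err, "opening config file")
        }
        return f, nil
    }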


x? will convert the error type for x, E1, to the error type for the calling function, E2, only when a conversion from E1 to E2 is defined. If it's not possible to do this conversion without some contextual information, then this bare conversion can't be defined and x? will not compile. In that case, callers will have to do something like

    x.context(...)?
    x.map_err(|e| ...)?
    ...etc...
so they have to provide contextual information.


You can use map_err or the failure crate's with_context to add context.


What do you think are the ills of errors.Wrap?


> But Rust has no equivalent of handle: the convenience of the ? operator comes with the likely omission of proper handling.

Additionally, Rust programs heavily use "RAII", avoiding the need for `handle`.

But there is talk of adding support for `catch`.

See https://internals.rust-lang.org/t/pre-rfc-catching-functions...


Go doesn't chain things much. I like that it doesn't.

Verbosity can be a pain. But needlessly dense code is (imho) less readable. Having a single character that changes the entire context of a statement is not fun (I have the same problem with !). Reading a set of a dozen chained functions, some with ? and some without... that's not fun for anyone.


I get the feeling that the core Go team is rather against 1-character operators. Also, the Go community doesn't tend to do a lot of call chaining from a general stylistic standpoint.


I've written a lot of Go in the last few years. I stopped chaining calls after I realized how much of a pain it is to debug.

When you need to print / log a value from the middle of your call chain you have to take the chain apart. So just write it out to start with. It's not any less efficient.
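A toy illustration (fetch and parse are hypothetical helpers):

    // Chained: nowhere to hang a log statement.
    result := parse(fetch(url))

    // Split: each intermediate value can be logged or inspected.
    body := fetch(url)
    log.Printf("fetched %d bytes", len(body))
    result = parse(body)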


I'd note that's only an issue if you don't have a good IDE.

In IntelliJ you can breakpoint on an expression that uses chaining, hold down the alt key and then click on the expression you want to evaluate. It turns into a hyperlink and can be viewed easily.


I think needing a heavy IDE to comfortably work in a language/paradigm is a negative.


Given what Lisp, Smalltalk, Mesa/Cedar, Oberon, Delphi, and VB introduced as productivity tooling, I see it as a negative to program as if my computer were still stuck on an 80x25 green phosphor terminal.


That's a silly caricature. I could retort "I see as a negative if my development environment takes 40GB of RAM and half an hour to boot up" and it would be on the same level.

Debuggers are great but they're not always practical, for instance for debugging remote targets with limited connectivity. Actually in some situations you may end up having to buy expensive licenses for closed source software to run a debugger on certain hardware (although that might not be a concern for Go).

I'm also of the opinion that firing the debugger as soon as something unexpected happens instead of taking 10 seconds to reason about your code might lead you to miss a more general problem with your architecture as you focus solely on the symptoms, but that's of course a lot more subjective and your experience may vary. Personally I generally tend to use debuggers as a last recourse when I really can't make sense of what's going on through "static" analysis.


Also, if you write a ton of log statements because that's how you debug, then when your production system blows up and you can't fire up the debugger, you have something to work on.


Or use modern instrumentation data that debuggers can work with like IntelliTrace, JMX, Flight Recorder, JTAG,....


I don't understand why it has to be an exclusive OR. Both printf-style logging and debuggers have their use. As I mentioned in my previous comment, deploying some tracing solutions can be a complicated and expensive process; writing to a UART is however very cheap. Breaking or slowing down the execution of tasks interacting with other threads and resources can be impractical or simply hide the bug altogether if it's race-condition related; sometimes toggling a GPIO and putting an oscilloscope on the pin beats any trace framework in terms of practicality. It will also work regardless of the programming language, development environment, software, and hardware used.

Saying that debuggers are for noobs and that they shouldn't be used is idiotic but arguing that not using them is doing it wrong and being stuck at the age of 80x25 terminals isn't much better. Wisdom is to use the right tool for the right job.


Amen


You don't need an IDE but it solves problems like "I don't want to chain my methods because my weak debugger can't handle it well".


Chained method calls being difficult to debug does not necessarily mean using a debugger. It can mean difficult to insert clarifying code in the middle of the chain: print statements, or additional value checks, and so on.


Chained methods also don't diff as clearly (the one liners anyway).


They do on GUI diff tools by using different colors.


It's still not as clear as just splitting across lines, plus in most tools you need to explicitly indicate you want to highlight word diffs.

But more abstractly speaking: are chained calls, from the POV of a human that didn't write it, or wrote it two months ago, more readable than a number of statements on multiple lines?


Generally speaking, Gophers don't interactively debug. There is some support for it, but it's historically been difficult and we have tended to adjust to logging everything and working out the problem from the logs.


That's OK, but with logs it takes more time and more tries to find a complex bug than it does with an interactive debugger.


I don't know about Go, but if I faced such an issue with rust, I would make a trait with one method:

    fn print(self) -> Self { println!("{:?}", self); self }

You can do it nicely, so all you need is to add one `derive` to a type declaration, and then you will be able to insert .print() into the chain.

It is like print in lisps, might be really handy for the debug printing.


I'm not sure why you would prefer to (1) create a new trait, (2) create a custom derive for that trait, (3) derive that trait for all the types you want to debug over assigning to a variable and printing. It sounds like an over-engineered solution to a non-problem to me.


Because that's a tiny amount of work that saves you from making bloated junk variables all over.


The compiler removes them, and the naming of intermediates often makes the code more understandable.


By bloat I mean in the source code.

Naming intermediates can sometimes make things clearer, but also it can make things less clear. Often there is no good name, or the good name is just as long as the code that made it.


(1), (2) and (3) need to be done once per lifetime. Or even less, if it was published on crates.io. It is not an issue. And it doesn't seem like over-engineering to me; it would be like 10 lines of library code, with a clear purpose. If it was 50 lines of code, with several methods for different cases, and with a lot of corner-cases, then it might be called over-engineering. But 10 lines of straight self-describing code with a two line comment about suggested uses is not.

As to derive for each type, I already use `#[derive(Debug)]` for most of my types, to be able to print them with {:?} in format!/println!. Though you are right in the sense that it would be great to make such a method a part of the Debug trait. But I personally have never needed it, so maybe complications of the Debug trait would be the over-engineering you speak about.

On the other hand, I do not like variables. Variables are complications of code, and complications like this make it impossible to grasp code at first sight. There are a lot of cases when you need a variable, and it is not apparent at first sight which case it is. You need to stop and think. I do not like to think just to find that there was no reason to think. Intelligence is a constrained resource; you need to use it wisely.


In Scala, I use an implicit class to add a `tee` function. I kind of wish it were in the standard library so I wouldn't have to import it.


> Also, the Go community doesn't tend to do a lot of call chaining from a general stylistic standpoint.

There's also the small matter of multiple returns breaking the chain.


> chaining

Could you do this?

check (check func1()).func2()

Looks ugly but maybe they can clean it up further. In fact maybe they could just introduce something that looks like ? but works like check.


It looks like check can be used in calling chains.

That is, if F is string to string,error, and G is empty to string,error then

  s := check F(check G())
either assigns a string to s, or calls the handler.


I hope this doesn't make it in. I'd hate to see the inevitable

check (check A(check B(check C( ... ))))


I wouldn't. It's very explicitly saying, this is a series of calls that can fail.


These are all great starts, I'm glad the Go team is finally able to get some purchase around these slippery topics!

Unless implied constraints are severely limited, I don't think they're worth it. The constraints are part of the public interface of your generic type; I'm worried that we could end up with an accidental constraint, or an accidentally missing constraint, with the downstream consumer relying on something that wasn't intended behavior.

For Go 2, I would really like to see something done with `context`. Every other 'weird' aspect of the language I've gone through the typical stages-of-grief that many users go through ("that's weird/ugly" to "eh it's not that bad" to "actually I like this/it's worth it for these other effects"). But not context. From the first blog post it felt kinda gross and the feeling has only grown, which hasn't been helped by seeing it spread like an infection through libraries, polluting & doubling their api surface. (Please excuse the hyperbole.) I get it as a necessary evil in Go 1, but I really hope that it's effectively gone in Go 2.


How would a replacement for `context` look?

Also, I don't share your sentiment. I actually find Context to be a quite nice, minimal interface. Having a ctx argument in a function makes it very clear that this function is interruptible. I certainly prefer this over introducing a bunch of new keywords to express the same.


The problem with context isn't necessarily the interface, it is that it is "viral".

If you need context somewhere along a call chain, it infects more than just the place you need it — you almost always have to add it upwards (so the needed site gets the right context) and downwards (if you want to support cancellation/timeout, which is usually the point of introducing a context).

Cancellation/timeout is important, so of course there have been discussions of adding context to io.Reader and io.Writer. But there's no elegant way to retrofit them without creating new interfaces that support a context argument.

Cancellation/timeout is arguably so core to the language that it should be an implicit part of the runtime, just like goroutines are. It would be trivial for the runtime to associate a context with a goroutine, and have functions for getting the "current" context at any given time. Erlang got this right, by allowing processes to be outright killed, but it might be too late to redesign Go to allow that.
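As a purely hypothetical sketch (none of these runtime functions exist; the names are made up):

    // Hypothetical: the runtime carries a context per goroutine, the
    // same way it already carries the goroutine's own state.
    func handler(w http.ResponseWriter, r *http.Request) {
        runtime.SetGoroutineContext(r.Context()) // hypothetical API
        doWork()                                 // no ctx parameter threaded through
    }

    func doWork() {
        ctx := runtime.GoroutineContext() // hypothetical API
        select {
        case <-ctx.Done():
            return // cancelled from above, with no explicit plumbing
        default:
        }
        // ... actual work ...
    }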

I'm ignoring the key/value system that comes with the Context interface, because I think it's less core. It certainly seems less used than the other mechanisms. For example, Kubernetes, one of the largest Go codebases, doesn't use it.


> it infects more than just the place you need it — you almost always have to add it upwards and downwards

> Cancellation/timeout is arguably so core to the language that it should be an implicit part of the runtime

This is exactly what I mean, thank you for putting it so succinctly! I would go even further to claim that Go 1 goroutines are fundamentally incomplete due to missing this ability to control their behavior from the outside, and context is an attempt to paper over the missing functionality.

The key-value store built into them is also an attempt to paper over a large gap in the language: the absence of GLS (goroutine-local storage :P).

That context combines two ugly coverups of major deficiencies with the fact that using it means doubling almost all apis and infecting the whole ecosystem is why I dislike it so much.


Agreed. Thread-local storage has rightly been criticized as a bad idea, but Go's contexts are arguably worse. It'd be easier if Go had something like Scala's implicits.


As a type theorist and programming language developer, I’ll admit that’s a fairly reasonable design for generics.

I’m still a bit disappointed by the restrictions: “contracts” (structural typeclasses?) are specified in a strange “declaration follows use” style when they could be declared much like interfaces; there’s no “higher-level abstraction”—specifically, higher-kinded polymorphism (for more code reuse) and higher-rank quantification (for more information hiding and safer APIs); and methods can’t take type parameters, which I’d need to think about, but I’m fairly certain implies that you can’t even safely encode higher-rank and existential quantification, which you can in various other OOP languages like C#.

Some of these restrictions can be lifted in the future, but my intuition is that some features are going to be harder to add later than considering them up front. I am happy that they’re not including variance now, but I feel like it’ll be much-requested and then complicate the whole system for little benefit when it’s finally added.


They are truly looking for feedback on these proposals. If you have thoughts on how this could change to be more compatible with future additions, consider doing a blog post write-up and submitting it to them.


Thanks for the comments. We tried for some time to declare contracts like interfaces, but there are a lot of operations in Go that can not be expressed in interfaces, such as operators, use in the range statement, etc. It didn't seem that we could omit those, so we tried to design interface-like approaches for how to describe them. The complexity steadily increased, so we bailed out to what we have now. We don't know whether this is OK. We'll need more experience working with it.
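To make the operator point concrete, a sketch along the lines of the draft (ordered follows the draft's examples; Min is illustrative):

    // A contract body can require an operator, which no interface can express.
    contract ordered(x T) {
        x < x
    }

    func Min(type T ordered)(a, b T) T {
        if a < b {
            return a
        }
        return b
    }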

I'm not sure that Go is a language in which higher-level abstraction is appropriate. After all, one of the goals of the language is simplicity, even if it means in some cases having to write more code. There are people right here in this discussion arguing that contracts add too much complexity; higher-order polymorphism would face even more pushback.


Yeah, I’m not sure higher-level abstraction is right for Go either—like I said, I do think the tradeoffs in the generics design are ultimately reasonable, given the design constraints.

I guess it’s those very constraints that get to me. This is going to sound harsh, but I believe the goal of “simplicity” is already very nearly a lost cause for Go.

The language is large and lacks any underlying core theory, with many features baked in or special-cased in the name of simplicity that would have been obviated by using more general constructs in the first place; the language is not simple, it’s easy for a certain subset of programmers. Adding convenience features here and there and punting on anything that seems complex by making it an intrinsic has led to a situation where it’s increasingly difficult to add new features, leading to more ad-hoc solutions that are Go-specific. This leads to the same problem as languages like C++: by using Go, developers must memorise details that are not transferable to other languages—it’s artificial complexity.

A major source of this complexity is inconsistency. There are few rules, but many exceptions—when the opposite is more desirable. There are tuple assignments but no tuples. There are generic types but no generics (yet). There are protocols for allocation, length, capacity, copying, iteration, and operators for built-in containers, but no way to implement them for user-defined types—the very problem you describe with contracts. Adding a package dependency is dead-easy, but managing third-party dependency versions is ignored. Error handling is repetitive and error-prone, which could be solved with sum types and tools for abstracting over them, but the draft design for error handling just adds yet another syntactic special case (check/handle) without solving the semantic problem. Failing to reserve space in the grammar and semantics for features like generics up-front leads to further proliferation of special cases.

Virtually all of the design decisions are very conservative—there’s relatively little about the language that wasn’t available in programming language research 40 years ago.

I don’t mind writing a bit of extra code when in exchange I get more of something like performance or safety wrt memory, types, race conditions, side effects, and so on. But Go’s lack of expressiveness, on the whole, means it doesn’t provide commensurate value for greater volume of code. But it could! None of these issues is insurmountable, and the quality of the implementation and toolchain is an excellent starting point for a great language. That’s why I care about it and want to see it grow—it has the immense opportunity to help usher in a new era of programming by bringing new(er) ideas to the mainstream.


Old and busted:

    func printSum(a, b string) error {
        x, err := strconv.Atoi(a)
        if err != nil {
            return err
        }
        y, err := strconv.Atoi(b)
        if err != nil {
            return err
        }
        fmt.Println("result:", x + y)
        return nil
    }
New hotness:

    func printSum(a, b string) error {
        x := check strconv.Atoi(a)
        y := check strconv.Atoi(b)
        fmt.Println("result:", x + y)
        return nil
    }


Why the handler function over something like Rust's propagation operator? [0]

I see that it adds significant flexibility, but at the cost of verbosity and a new keyword that largely conflicts with one of Go's niches right now, web servers. And that flexibility seems likely to go unused in my experience. I would be shocked to see anything other than `return err` inside that block.

Sure errors are values and all that, and maybe I'm just working on the wrong codebases or following worst practices. But generally I see three approaches to errors in Go code, in order of frequency:

1. Blindly propagate down the stack as seen here. "It's not my problem- er, I mean, calling code will have a better idea of what to do here!"

2. Handle a specific type of error and silently recover, which does not typically need a construct like this in the first place.

3. Silently swallow errors, driving future maintainers nuts.

This seems to only really help #1, but `return thingThatCanError()?.OtherThing()?` can handle that just as well.

[0]: https://doc.rust-lang.org/book/second-edition/ch09-02-recove...


I believe this is explicitly discussed in the doc: https://go.googlesource.com/proposal/+/master/design/go2draf...


Their discussion of Rust's error handling is woefully incomplete. They seem to think that the only thing that exists is the ? operator, which will return your error early, or a match statement, which is verbose. It misses some important aspects of Rust's error handling.

1. There is a typed conversion between one type of error and another, which can keep context.

2. There are traits which can extend error types.

3. Backtraces can optionally be kept on (at a performance hit during errors, of course).

I have a lot of code written that looks like this, using the excellent "failure" crate, which optionally allows adding context to any error:

    file.seek(SeekFrom::Start(offset))
        .context(format!("failed to seek to {} in file", offset))?;

You get explicit errors, minimal code, optional error context. It's pretty perfect.


I got turned off of failure when a reddit user reported that it slowed down his program by 300x and the author of failure's response was that the original poster had misused failure.

https://www.reddit.com/r/rust/comments/7te8si/personal_exper...


They said the OP misused it largely because the documentation was insufficient, which is a reasonable response. There are different types of error object, and the one used was a poor fit for common, expected errors.

What would you have wanted them to do instead?


That's unfortunate, as I think you're reading into it to see an attitude that is not there. In fact that response makes me more interested in using it.

The author starts by identifying the use of the wrong type. They then immediately blame it on their own documentation in the very next sentence.


That snippet looks like it'd be very helpful, I'm going to make a note to check out that library. Thanks!


"Their discussion of Rust's error handling is woefully incomplete."

These are clearly not intended to be dissertations on the topic.

However, I don't think anything you said changes their discussion, actually.


> However, I don't think anything you said changes their discussion, actually.

How does it not? It seems to me that OP revealed an alternative method of error handling (which is very cool I might add), exactly in response to Google's discussion on Rust's error handling.

If the discussion doesn't address this, then it seems reasonable to say it's incomplete, as its omission makes it appear they're not debating Rust at its strongest.

edit: paper->discussion


Because none of the things said actually refute any of the points in the Google discussion? At all?

I also definitely don't feel a failure to go off and survey every random library that exists for Rust makes a discussion "woefully incomplete" or that it isn't debating Rust at its strongest. The same treatment is given to all languages, including Rust, Go, etc.

The discussion is also about language features, and it discusses the language features.


In your new hotness, I believe that handler is already implicit, so you can just leave it off.

What I am still curious about, and can't tell whether it is possible from the current draft (possibly I've just skimmed too hard), is whether printSum can become:

    func printSum(a, b string) error {
        fmt.Println("result:",
            check strconv.Atoi(a) +
            check strconv.Atoi(b))
        return nil
    }
Possibly with an additional paren around the check.

(I'm just using this as an example; in this case I probably would assign x & y on separate lines since in this case this saves no lines, but there are other cases where I've wanted to be able to call a function that has an error but avoid the 4-line dance to unpack it.)


It can, and this is mentioned in the error chain design doc[1].

[1]: https://go.googlesource.com/proposal/+/master/design/go2draf...


Yes, what you write should work with the current draft.


Cool. It isn't something I'd want to abuse, but there are places in my code where it would clean things up nicely, too.


That should be possible because check is an expression.


Serious question: why is the error return from Println not handled?


In general, catching errors when there is simply nothing useful to do with them isn't that useful. If the world is so broken that printing isn't working, it probably isn't the print failing that you care about. Even the errcheck linter, which I use in all my serious code, doesn't make you check the error from basic print statements like that. (Fprint it makes you check. But not just Print.)


Then why does Print return an error code at all?


So if the printing doesn't work, you can see what's going on?


But all the comments in response to the question are saying there's no point in checking the return code of print, because if it's broken, you can't print the error code you got anyway (or it's never broken in practice).

I think there's a deeper point the question asker was making here - it's easy to say "everyone should just check error codes" but in practice it never works out that way. People always ignore/swallow some of them, or propagate them in such a way that they lose detail e.g. a detailed error code from a subsystem is mapped to a generic error code higher up the abstraction stack. Print failing is a classic example of something most wouldn't care to check, but perhaps there's a good reason for it. With exceptions in the common case, where nobody checked, an exception would propagate up the stack until it was either caught and handled in a generic way ("subsystem X failed") or simply caught by the runtime.


"But all the comments in response to the question are saying there's no point in checking the return code of print"

That's not quite what I said, and the difference is important. I said there's no point catching an error when you have nothing useful to do with it.

It so happens that the basic "print to standard out" is the extreme outlier of a function where you have nothing very useful to do with it in the general case. What are you going to do, print an error? Are you going to log it? Probably not, because if you have a log in the first place you probably aren't seriously using printing.

There are exceptions where you might care, such as a command-line program where printing is the major case you care about, but it so happens that the printing functions often have nothing useful to do with errors.

It is not the only case. I often don't log things terribly strongly during the time a socket is being shut down at some point in a protocol where both sides have agreed to shut the connection down, for instance. Or if I have a socket connection and the underlying connection fails, I don't need an individual log message for every concrete manifestation of that fact necessarily. If logging fails... uhh... what exactly is one supposed to do with that information in the general case?

So even though I use errcheck like I said on all my code, there are places where I throw the error away on purpose. It's clearly indicated in the code, though, rather than being implicit, since errcheck requires a "_ = erroringThing()" line of code. Except for printing to stdout, because as a special case, that's just too common a case to worry about, even when one is careful enough to use a linter for everything else.


There's no reason to care about print being broken, unless there's a reason to care about print being broken. Both cases need to be supported. The developer ignoring the error value if they don't care is perfectly rational.


I'm having a hard time making it fail. Even with Printf, in the output it tells you what you did wrong instead of returning an error.

In fact, the only _print function I could get to fail was Fprintf by sending an already closed file. I suppose if it was somehow not allowed to write to stdout the other functions might fail. But I'm not sure.

    _, err := fmt.Printf("Hello %d", "world")
    fmt.Println(err)
    
    >> Hello %!d(string=world)
    >> <nil>


Yes, the various fmt.Print functions only return an error if Write on the underlying Writer returns an error.


Here you go: https://play.golang.org/p/0vGdejrjm5H

To reproduce the key lines here:

    os.Stdout.Close()
    _, err := fmt.Println("foo")
You can also close stdout from outside the program, use ptrace to cause the write call to fail, or a number of other things.


There are two main reasons:

1. Many developers learn the print funcs before they really start paying attention to errors as return values, and so never think to check what it returns.

2. If println is failing, things are going very wrong. It would probably be more ergonomic to just have it panic in most cases, because if it errors out then what are you honestly going to do to recover gracefully? Just let your surrounding infrastructure handle it, whether that's restarting the process, starting a new container somewhere else on the cluster, or whatever.


If stdout is unavailable, should that really be fatal? Perhaps it should be considered an unusual equivalent to piping to /dev/null.


Which is the current practice of ignoring errors from `fmt.Println()`.


In this particular example there is no good reason, it’d be better to end with:

    _, err := fmt.Println("result:", x + y)
    return err


Only seems useful when you have a function and want every error handled the exact same way, and don't have any requirement to add context the way https://github.com/pkg/errors does.


Read the design draft (https://go.googlesource.com/proposal/+/master/design/go2draf...). The issues you raise are addressed.


Not quite. You'd need to write new `handle err { ... }` blocks for every new context, like how they have one in the for loop. In effect you're just moving the handling of the error away from the place it occurred, which is not optimal. As someone who writes Go every day I don't see this strategy effectively cutting down on boilerplate code or adding clarity.


While this is true, more often than not, in lower-level functions you want errors to bubble up. For example, in most Node.js code, the first line in each callback raises the error to the callback, or likewise throws up in a promise/async chain.

Beyond this, there's no reason you can't add additional context and/or wrap the original error before returning your own error.

With this additional syntax and the addition of generics, I'm far more likely to take up go. Up to this point, I'd only toyed with it a little, because it just felt so cumbersome to actually get stuff done compared to node. Beyond that, dealing with web-ui, if I'm going to take on a cognitive disconnect for a back-end, I want as little friction as possible.


I'm in the same boat as you. Error handling and code generation (to support the lack of generics) in go always left a poor impression on me. I ended up going back to node for work, f# or typescript for personal projects.

These two changes alone are enough to get me to pick go back up.


My point is that using this to wrap context around errors is even more verbose than what's available/common today, since you either settle for one wrap for the whole function or you have to define multiple `handle` blocks.


Does the "old" way no longer work?


To be frank, I was for a long time in the camp that generics are a much-needed feature of Go. Then, this post happened: https://news.ycombinator.com/item?id=17294548. The author made a proof-of-concept language "fo" on top of Go with generics support. I was immediately thrilled. Credit to the author, it was a great effort.

But then, after seeing the examples, e.g. https://github.com/albrow/fo/tree/master/examples/, I could see the picture of what the current codebase would become after it's introduced. Lots of abstract classes such as "Processor", "Context", "Box", "Request" and so on, with no real meaning.

A few months back, I tried to raise a PR to Docker. It was a big codebase and I was new to Golang, but I was up and writing code in an hour. Compared to a half-as-useful Java codebase where I have to carry a dictionary first to learn 200 new English words for complicated abstractions and spend a month to get "assimilated", this was absolute bliss. But yes, inside the codebase, having to write a copy-pasted function to filter an array was definitely annoying. One cannot have both, I guess?

I believe Golang has two different kinds of productivity. Productivity within the project goes up quite a lot with generics. And then, everyone tends to immediately treat every problem as "let's create a mini language to solve this problem". The second type of productivity - which is getting more and more important - is across projects and codebases. And that gets thrown out of the window with generics.

I don't know whether generics is a good idea anymore. If there was an option to undo my vote in the survey...


All I want is typesafe efficient higher order list operations like map, filter, reduce, flatmap, zip, takeUntil, etc. I don't care if Go puts them into the stdlib and only allows these operators to use special internal generics to operate. If Go had a robust higher order std library like this, I would stop asking for generics. Again, must be typesafe and zero overhead.
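For concreteness, the kind of thing meant here, sketched in the draft's generics syntax (Map is my own example, not a proposed stdlib function):

    func Map(type T, U)(xs []T, f func(T) U) []U {
        out := make([]U, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    // usage: squares := Map([]int{1, 2, 3}, func(x int) int { return x * x })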


One of the main motivations for adding generics is that we could then put typesafe algorithms and data structures in the standard library.


And make the standard library unmagical? One of the things I've always disliked about Go is how stdlib provides magic facilities unavailable to user code.


I agree, but there's not much, and it's usually for decent reasons:

* the "time" package hooks into the Go scheduler.

* the "reflect" package needs type information out of the runtime.

* the "net" package hooks into the Go scheduler a bit. But way less than it used to. I think you might even be able to do this all yourself outside of std nowadays.

What else?

Our rough plan going forward is to actually move a bunch of the stdlib elsewhere but ship copies/caches of tagged versions with Go releases. So maybe "encoding/json" is really an alias for "golang.org/std/encoding/json" and Go 1.13.0 ships with version 1.13.0 of golang.org/std/encoding/json. But if you need a fix, feature, or optimization sooner than 6 months when 1.14.0 comes out, you can update earlier.


My guess is that quotemstr meant builtins: append, copy, the range "protocol", etc.


There are many examples of things builtins can do which user-code can't:

1. Constructs like "x := <-ch" and "x, ok := <-ch" (and map indexing and so on) are only available to builtin types.

It's impossible to write a thing like "x, ok := <-ch" for your own type, where it either panics or doesn't depending on whether the caller wanted two return values or one.

2. generic types, like 'map' and 'chan'.

3. 'len' can't be implemented for a user type, e.g. "len(myCustomWorkQueue)" doesn't work.

4. range working without going through weird hoops (like returning a channel and keeping a goroutine around to feed it)

5. Reserving large blocks of memory up front (e.g. as 'make' can do for slices and maps) in an efficient manner (e.g. make(myCustomWorkQueue, 0, 20) doesn't work)
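To make a few of these concrete (Queue and NewQueue are hypothetical):

    m := map[string]int{}
    v, ok := m["k"] // comma-ok indexing: builtin maps only (point 1)
    _, _ = v, ok

    q := NewQueue()
    // n := len(q)            // point 3: len() can't be taught about Queue
    // for x := range q {}    // point 4: range doesn't work on user types
    // q = make(Queue, 0, 20) // point 5: make() can't reserve capacity for Queue
    _ = q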


You're talking about the language. He's talking about the standard library.


We're both talking about both. The core of my complaint is that the language privileges parts of the standard library.


> privileges parts of the standard library

It really doesn't. Maps and channels are not part of the stdlib. They're part of the language, but not the library.

Similarly, range is part of the language, not the stdlib.

If the stdlib was privileged, "container/*" in it would be generic, but it's not.


All right, but my point stands. Maps and channels should be part of stdlib, and they should rely on primitives available to user code too.


Maps and channels are not part of the standard library. You can tell because you can replace the entire standard library, using none of it, and still be programming in Go with maps and channels.


Parent wrote "should be" not "are".


Such primitives would bring a lot of complexity which is something Go tried to avoid.

Consider that make, map, chan, etc. are just language features/specs, just like int64, string, func, main, etc.


Why should string be fundamental?


Nobody said it "should" be. It simply is, in Go, the language we're discussing. If you nerdsnipe an HN thread with an argument that there's no difference between a language, its runtime, and its standard library, people are going to pile on to point out that's not true.


Could, sure. Should? I think you are asserting facts that are not in evidence.


I'm a bit confused, and I'd like to understand your position better. I can see why arrays, structs/tuples, or [tagged] unions are usually primitives – they sort of express different fundamental layouts of data in memory, so they need support from the runtime/compiler. In the case of Go, channels also make sense as a primitive – as I understand it, `select` requires scheduler support.

But higher level data structures like lists, sets, maps etc. are all implemented using those primitives under the hood. So, especially in a "systems language", I would sort of expect them to just be libraries in the stdlib, not language features[1]. But if you think otherwise (or see a gap in my reasoning) I'd love to hear why!

[1] Except maybe some syntactic sugar for list/dict/set literals.


"Should" implies some normative assignment of value.

> But higher level data structures like lists, sets, maps etc. are all implemented using those primitives under the hood. So, especially in a "systems language", I would sort of expect them to just be libraries in the stdlib, not language features[1].

Maybe! But if those higher order structures are closer to language features than libraries -- and, specifically to this conversation, if they leverage parts of the runtime that aren't available to mere mortals -- is it a strictly worse outcome? I don't know. Some evidence might suggest yes. But I think it's at best arguable.


I think I understand, thanks for explaining.


> But higher level data structures like lists, sets, maps etc. are all implemented using those primitives under the hood

In Go you don't have generics as a primitive with which to build those things.

As a result, you don't have the building blocks for anyone to build a usable data structure unless it's baked into the core of the language.


I'm aware of that, and I read the generics proposal linked. I commented here because I wanted to respond to this:

> Should [be just libraries]? I think you are asserting facts that are not in evidence.

I tried to describe why I feel that go's collections should be libraries and not builtins, and hopefully understand why GP seemed like they were questioning that idea.


What's magical about the stdlib?


Maybe they meant the language, which has some generic functions like append or delete - instead of those functions being attached to a data type or accepting any type, they only act on built-in types. I doubt any of that would change though given the stated goal of remaining compatible in most cases.


There are a handful of packages (time and some stuff in os for example) that are tied into the runtime and can't be rewritten outside the standard library. It's not a huge amount, and some of it can probably be fixed in the future.


What if generics were just implemented internally in Go? IMO, there are good use cases, like generic numbers (int32, int64, etc.), list processing, etc. There's already precedent for this with map[A]B, make(), etc.
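
For reference, the precedent in today's Go: these operations are already generic, they're just closed to user code (standard Go 1, runnable as-is):

    package main

    import "fmt"

    func main() {
        // make and map[A]B are generic over key and element types,
        // but only the compiler gets to instantiate them.
        ages := make(map[string]int)
        ages["gopher"] = 9

        // append works on a slice of any element type; delete on any map.
        xs := []float64{1, 2}
        xs = append(xs, 3)
        delete(ages, "gopher")

        fmt.Println(ages, xs) // map[] [1 2 3]
    }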


What changed? Why is the Go team finally considering this now, when it has been requested for these exact reasons by many credible developers since Go's beginning?

EDIT: Yes, something has changed. They always said "no, but maybe later" and now it's "ok how about now". That is a big change and I am just asking what's the reason for it.


A year ago we said it was time to move toward Go 2: https://blog.golang.org/toward-go2 . This is another step in that direction.


Then I'm a year late in asking why the Go team is now deciding to say yes to something they said no to for over half a decade.


The blog post explains that.


Nothing changed. It wasn't off the table to begin with. See: https://golang.org/doc/faq#generics


The parent's question boils down to "what put it on the menu".

So, yeah, it was always "on the table" in some hazy, we'll-see-later way, but not like that.


I think a good faith reading of the circumstance would lead one to conclude that the passage of time, and other, higher-priority and lower-cost-of-implementation issues being addressed, have put these issues on the menu.


Likely because they have now figured out implementations of these features which they feel work well with the rest of the language design.


Because now is later than when they said later?


The reason for saying "no, but maybe later" originally was that it was always a desired piece of functionality. I believe the word "inevitable" was even tossed around. The problem was that they didn't like the existing implementations of generics in other languages and also didn't know how they would improve it. I haven't had time to look at the details, so I'm not sure what they actually did, but I think they basically came up with a strategy that they are mostly happy with.

You might ask, why not pick a solution earlier and then improve it over time? However with a popular language it's not that easy. People are writing lots of code. If you start early, you can paint yourself in a corner and end up with a system that you really can't improve very much because it introduces incompatible changes. Note the extremely painful (and drawn out!) incompatible changes with Perl 6 and Python 3. If you make the wrong choice early, you might still take over a decade to find an opportunity to replace it. And since the state of the art in language design moves very slowly, it took a long time before they could see something that they felt they wouldn't regret choosing.


I highly recommend reading https://go.googlesource.com/proposal/+/master/design/go2draf...

This is part of the material linked to from this thread. It answers all your questions. You will see that some of the people who are directly replying to you are very much involved in this effort.

Edit: I tried editing the previous message but HN actually automatically made a reply instead....


Adding generics is a major change (e.g. it would restructure the stdlib), and so it would only belong in a major-version bump of the language.


Though the authors have clarified that the "2" in "Go 2" is just for marketing purposes, and that the actual release would still be "Go 1.x", and shouldn't break any existing code (so it won't restructure the stdlib in any major way, unless they want to do a huge mass-deprecation).


So Go is following the exact same path as Java.


Inevitably.

Also inevitably, Java will get value types (structs) at some point.

Just like with Go Generics, people will complain the feature is bolted on. After some time it is just one of the many quirks of a complex language.

I think it was Stroustrup who said: there are only two kinds of languages, the ones people complain about and the ones nobody uses.


And Java had quite a few good examples to draw on, like Modula-3 and Eiffel, but alas they wanted something for the average Joe. Kind of sounds familiar.


The difference is that Go could have learned from Java.


I think because it would break backward compatibility in v1, and not doing that was the Most Important Thing in all 1.xx releases.

v2 is allowed to Break Stuff. So obviously they're going to consider the thing that people have shouted for most in Go... generics and error handling (and package management but they found a way of doing that earlier).


> What changed? Why is the Go team finally considering this now, when it has been requested for these exact reasons by many credible developers since Go's beginning?

Simply, Go has probably accumulated all the developers it is going to accumulate without something drastically changing.

At this point, I can't think of any programmers around me who now want to learn Go.


I basically agree. I still think, while we're at it, making Generics a general feature of the language would probably be easier than adding all kinds of special cases.

But yeah, if Go just added a generic map/filter mechanism (Python's list comprehensions, in turn inherited from Haskell, look very nice), that would cover about 75% of the use cases I want generics for. Add a generic mechanism to use the range operator with user-defined types, and we're at about 95%.
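
For the range part, one workaround that exists in Go 1 today is exposing iteration as a channel, at the cost of a goroutine per traversal; a minimal sketch, with IntTree as a made-up example type:

    package main

    import "fmt"

    // IntTree is a hypothetical user-defined collection.
    type IntTree struct {
        Left, Right *IntTree
        Val         int
    }

    // Walk exposes an in-order traversal as a channel so callers can
    // range over it; the channel is closed when the traversal finishes.
    func (t *IntTree) Walk() <-chan int {
        ch := make(chan int)
        go func() {
            defer close(ch)
            var visit func(n *IntTree)
            visit = func(n *IntTree) {
                if n == nil {
                    return
                }
                visit(n.Left)
                ch <- n.Val
                visit(n.Right)
            }
            visit(t)
        }()
        return ch
    }

    func main() {
        t := &IntTree{Val: 2, Left: &IntTree{Val: 1}, Right: &IntTree{Val: 3}}
        for v := range t.Walk() { // ranging over a user-defined type, indirectly
            fmt.Println(v)        // prints 1, 2, 3
        }
    }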


> Python's list comprehensions (in turn inherited from Haskell) look very nice

Do they? I find them hard to read and hard to chain. I tend to prefer basic map, filter, reduce methods


Mmmh, that is a good point. I never used nested list comprehensions, to be honest, and I have not used Python in a couple of years. For non-nested cases, they are very nice, though.

OTOH, I never used nested map/remove-if constructs in Lisp, because that, too, can get rather annoying to read.


I agree with this sentiment completely. Of course, if the only way to get these wonderful list operations is generics, then I guess I want generics, too.


You might be interested in my project: https://github.com/lukechampine/ply

The idea is that when people say "generics", they usually just mean map/filter/reduce. So ply just bakes those into the language directly instead of adding new generic programming facilities.


Those are the things that I don't want to see in Go, honestly.


I'm curious: why not?


Generally, chaining things in the style that encourages that makes it harder to debug the code, harder to step through it in my head or on paper, and harder to compile to performant code. I'd rather see loops.


On the other hand, you're less likely to need to debug those things because they've been implemented once, correctly. A manually-written filter, map, etc. will pretty much always have a higher probability of containing a bug. And in languages that have these features, I've never found it difficult to debug. What issues have you had in the past?


I've almost never had issues with manually written filters/maps -- at least, not with the iteration itself. On the other hand, having to untangle stack traces of closures of closures that have been passed through closures does slow me down when the function that's being mapped is buggy.


I guess I've never run into issues where a map or filter slowed me down for debugging, most languages will pretty easily show you where the error occurred. And if you're `map`ping and `filter`ing with pure functions, debugging is pretty simple because you just need the input that went into the buggy function. A good compiler will also optimize the closures into something you'd write by hand in Go. Overall I'm glad to see Go get these nice tools for more easily working with collections.


I've just never run into places where that style improved readability, performance, or debuggability. Outside of the Haskell world, I've usually found myself preferring loops for ease of following the code.


Thanks for your answer. I find it very interesting how different people come to different conclusions about these basic approaches to writing programs. Imho, as always, there's no one true answer and understanding where everyone comes from is very important in understanding this divergence of thought.

In my work I don't think I could live without these higher level abstractions anymore. They enable me to write clear and concise code that helps me keep accidental complexity under control so that I can focus on the essence of what I'm trying to codify. Performance, on the other hand, is not a primary concern to me, as long as it stays above some acceptable threshold.

I appreciate how different tasks would require different approaches. I'm interested in which direction Go goes (no pun intended).


At that point why not just add generics...


Lisp tends to build abstractions, too, and it also does that via macros, which are even harder to mentally parse unless you have a habit.

I'd say that to attain simplicity and clarity, you have to limit the use of abstraction. OTOH there's no way to avoid abstraction at all; programming is all about abstraction. If the means of abstraction are not powerful enough, copy-paste programming and boilerplate proliferate, making code less maintainable and harder to understand.


In my opinion, implementing these higher order functions with the help of one powerful abstraction (generics) is much simpler than having to deal with a bunch of special cased weaker abstractions (magic map, magic filter, magic reduce, magic...).
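
For instance, under the draft design a single, ordinary library function covers what a magic built-in map would do. A rough sketch in the draft's parenthesized type-parameter syntax (not valid Go 1, and the final syntax may change):

    // Map applies f to each element of s; no compiler special-casing.
    func Map(type T, U)(s []T, f func(T) U) []U {
        r := make([]U, 0, len(s))
        for _, v := range s {
            r = append(r, f(v))
        }
        return r
    }

    // Filter keeps the elements of s for which keep returns true.
    func Filter(type T)(s []T, keep func(T) bool) []T {
        var r []T
        for _, v := range s {
            if keep(v) {
                r = append(r, v)
            }
        }
        return r
    }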


So you want a Lisp with different syntax, right?


That's about right. And gradual types, multi methods, generics and tight integration with C++: [0].

Dylan was too long winded for my taste, as are most modern attempts.

[0] https://github.com/codr4life/snabl


You say it like it's a bad thing.


I was just curious, because the functionality you want is in nearly every list processor, no matter what syntax is used. And if it isn't, you macro it away, right?


Some abstractions have stood the test of time - map, reduce, sort, those are worth having and various other collection types would be nice to see.

I agree the introduction of generics is a big worry not because of what good programmers will do with it, but because of what people more enamoured with pretty abstractions than code that does work will do with it. I don’t particularly want to live inside someone else’s creation for weeks just to understand it. The existing culture of no nonsense and minimal abstraction should help police that though.

I’m not that keen on the syntax of contracts either, but we’ll see what comes out at the end, that’s not so important - the idea of contracts looks like a good one in theory at least, but it feels like they could have used interfaces instead somehow (I’m probably missing something there).

The errors proposal is interesting as it looks like try/catch at first glance but importantly doesn’t leak that error across function/lib boundaries as exceptions do, so it’s a natural extension of the existing approach.


> I’m not that keen on the syntax of contracts either, but we’ll see what comes out at the end, that’s not so important - the idea of contracts looks like a good one in theory at least, but it feels like they could have used interfaces instead somehow

I'm not sure I understand the need for contracts, really. If a function with a type argument is compiled, the caller has to use specific types to call it, so the compiler can determine whether said type has the required methods used in the generic function. Therefore the compiler knows everything it needs to fail with a compile-time error without any extra ugly "contract" syntax. What am I missing? Are some types only resolved at runtime?


That's exactly what C++ does for its templates. The problem is that if you want to use a generic function, you need to look into its implementation to see what requirements it places on its type parameters. And if you fail to meet a requirement, you'll very likely get several hundred lines of cryptic compiler error messages; C++ compilers have been trying very hard to make these messages readable, but it's still an awful experience. Also, an implementer now needs to be extra careful when updating the function, since a single seemingly innocuous change may break downstream users. Of course this can be prevented by carefully written exhaustive tests, but then what's the point of not having contracts?

So the lessons from a number of PL implementations show that this kind of information needs to be explicitly encoded in the code, and that's the whole point of having "contract", "type class", "concept" or whatever else it's called. Compilers may be able to infer the technical details of a type, but at least for now they're not sophisticated enough to infer the original intention of the writer.
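
For concreteness, the draft's contract construct states the requirement at the function boundary instead of leaving it implicit in the body. A sketch adapted from the contracts draft (syntax subject to change):

    // stringer is satisfied by any type with a String() string method.
    contract stringer(x T) {
        var s string = x.String()
    }

    // The requirement is visible in the signature: instantiating
    // Stringify with a type lacking String() fails at this boundary,
    // not deep inside the body as with C++ templates.
    func Stringify(type T stringer)(s []T) (ret []string) {
        for _, v := range s {
            ret = append(ret, v.String())
        }
        return ret
    }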


Sorry, still don't get it. Extra contract code needs to be written just to prevent "cryptic" compile-time errors? It's just a type mismatch/error like many others to me. Cryptic error messages can be made less confusing by improving the output.

Since Go doesn't have standalone object files / binary libraries, it's not going to produce confusing link-time errors for this.


The problem is you don't know if the error is in the type parameter or in the generic code. Take this very simple C++ program as an example: it has 3 potential sources of error, and it is not clear from the contract of the foo function which one is the actual error.

    template <typename T>
    void foo(T bar)
    {
        bar.hardToSpell();        //1. Error here?
    }

    struct Bar {
        void ahrdotspell();       //2. Error here? (misspelled on purpose)
    };

    int main() {
        Bar buzz;
        foo(buzz);                //3. Error here?
    }

In this particular case it's easy to deduce that something is misspelled and that error 2 is the real error, but the compiler is probably going to tell you it is error 1. And once you get into more complex programs, the problem might be that the Bar class is simply not supported by the generic function, or that the generic code is implemented incorrectly.

If you instead could add an interface constraint on T (like C#/Java/TypeScript allow, for example), it would be clear from the contract that this interface defines all the required functions on T; if Bar doesn't implement those, it is unambiguous that the implementation of the interface is missing.


> If you instead could add an interface constraint on T (like for example C#/Java/Typescript allows) it would be clear from the contract that this interface defines all the required functions on

Then you just have a 4th possible spot for an error. I don't see an essential difference between treating the function definition as the reference and having a contract (where typos are possible as well), except that the latter is cumbersome and superfluous.


Assuming the interface definition is correct, it gives you one less degree of freedom. Error #1 is always evaluated against the interface. Error #2 is also always evaluated against the interface. Error #3 is evaluated against the type constraint.

So each of the 3 potential errors can always be pinpointed exactly, as opposed to my example, where it could be any of the 3 (or all 3 simultaneously), because "T having a hardToSpell method" is not part of any public interface description; it's something deep in the implementation of the foo function (in my example it's not very deep, but in real code it usually is).


Typically you would have the template instantiated for multiple classes (…or there wouldn't be a need for templates). Each class will have its own definition of the functions used on the template argument. If the function definition is the reference, which one?


In this case the mismatch is between a type argument and the way that a type parameter is used, so it's not a type mismatch/error, it's a meta-type mismatch error. You are suggesting that the meta-type be inferred from the function (and any functions that it calls) rather than being explicitly stated. That is doable--C++ does it--but it means that the user has to understand the requirements of an implicitly inferred meta-type. The Go design draft suggests instead that the user be required to understand an explicitly stated contract.


As I mentioned, it's not just for avoiding cryptic error messages, though I am not sure we're even referring to the same thing, as this is a completely different issue from linker errors. Believe me, improving the output doesn't help that much. Hundreds of C++ compiler developers have been trying that for decades. None of them has succeeded to the level of making a novice understand what's going on when they hand sort(list) to the compiler.

To reiterate my point, it serves as a "contract" between users and the code author. Without this, any user of a generic function will implicitly depend on its implementation, which creates tight coupling. Cryptic error messages or unintentional breakage are just undesired side effects of not having this clear boundary. So, the interface of a generic function needs to be explained to its users anyway; then why not make it understandable to the compiler and prevent users from making such mistakes?


Go does have binary packages; it just happens that most devs always build from source.


I NEED generics to do any quantitative code. I evaluated Go for a project and completely ruled it out because of this. You're unable to write one algorithm that covers float32 and float64, scalars vs. vectors vs. matrices, etc.

There might be a bias in the commentary because the membership of the Go community has self selected to just be those for whom the current constraints are feasible.


If you need a generic algorithm where the set of possible types is small and fixed (e.g. just float32 and float64, or just NxM matrices for N,M=1,...,4), you might be able to `go generate` the concrete implementations from a text template.
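
A sketch of that pattern, assuming the file below is named sum64.go; the sed pipeline is only illustrative, and a real template tool would be more robust:

    package alg

    // The float32 twin of this file is produced by `go generate` rather
    // than maintained by hand. grep strips the directive itself so the
    // generated file doesn't regenerate anything.
    //go:generate sh -c "grep -v go:generate sum64.go | sed 's/float64/float32/g;s/Sum64/Sum32/g' > sum32_gen.go"

    // Sum64 returns the sum of a slice of float64 values.
    func Sum64(xs []float64) float64 {
        var s float64
        for _, x := range xs {
            s += x
        }
        return s
    }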


Text generation is fragile and error prone, does not play well with other type-based tooling, is difficult to upgrade, and adds unnecessary busywork to the programmer's day.

When Java's limited, half-crippled version of generics showed up, source code generation died. Pretty much overnight. Why doesn't anyone do it? Because it's a terrible experience compared to having types.


You won't be able to do numeric stuff with these proposals either. You would still be missing operator overloading and integer-valued type parameters, which are the basis of most generic numeric code.


True about no operator overloading, but that's just syntactic sugar for functions.

I believe you can get integer parameter types by using something like the `convertible` contract seen here https://go.googlesource.com/proposal/+/master/design/go2draf... (unless I'm mistaking what you are saying)
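
If I'm reading the draft right, contract bodies can also state operator requirements, which is what generic numeric code needs in place of operator overloading. A sketch in the draft syntax (whether bare operator expressions are spelled exactly like this is my reading of the draft, not a settled detail):

    // addable requires that T support +, i.e. the numeric types (and
    // string). The contract body is never executed; it only describes
    // what operations the type argument must allow.
    contract addable(x T) {
        x + x
    }

    // Sum then works for []int, []float32, []float64, and so on. Note
    // that nothing here gives you C++-style value parameters such as
    // std::array<int, 3>.
    func Sum(type T addable)(xs []T) T {
        var total T
        for _, x := range xs {
            total = total + x
        }
        return total
    }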


I believe you are mistaken. C++ allows things like std::array<int, 3>, where 3 is an integer-valued (non-type) template parameter.


And after that evaluation what did you use?


Fortran does ok without generics.


So does C, but you don't expect the same things from a language from the 70's. I don't think C or Fortran would be very popular if they were released today.


I won't argue with that, but I'd just like to say that the newest Fortran versions actually make a quite modern language.

Most of its quirks are related to compatibility with older versions. If a new version without that handicap were released today, it might look nicer than most people think.


Sure, I'm not denying that generics are useful sometimes. It's just that people sometimes exaggerate how big of a problem it is not to have them.



> In Fortran 90, user-defined procedures can be placed in generic interface blocks. This allows the procedures to be referenced using the generic name of the block.

I don't know Fortran, but I'm fairly sure that you just googled "fortran generics" and pasted the first result without checking it :)


Fortran got simplified generics in 2003, I pasted the wrong link by mistake.

https://cug.org/5-publications/proceedings_attendee_lists/20...

https://software.intel.com/en-us/node/692037


Fair enough, I didn't know about that feature. However, people were using Fortran successfully for writing huge numeric libraries waaaaayyy before that.


I was successfully using Z80, 68000 and 80x86 Assembly 30+ years ago as well.


Can't you just use different names, algF32, algF64, really? Is this the end of the world? No wonder we have npm packages isEven and isOdd; sometimes you need to get your hands dirty.


You can't legislate against poor taste. Community norms help but there's always exceptions.

It's interesting to consider Python which - generally - has strong community norms against excessive abstractions and too much magic.

There are some notable exceptions. I believe Twisted was regarded as "UnPythonic" and I've heard complaints about asyncio. Interestingly both are related to the same problem space.

I'm sure there are other examples of inscrutable pyramids of abstraction but Python also has many examples of good, readable abstractions.

Maybe it's not abstraction that's the problem but poor judgement? What makes one language sprout AbstractSingletonProxyFactoryBeans and another not? I'm not sure it correlates directly to "ability to create abstractions" - I think there's a human element.


> What makes one language sprout AbstractSingletonProxyFactoryBeans and another not?

Missing features that enable higher levels of composition and abstraction.

Such as, for example, generics.

If you look at the usual punching bag in these discussions, Spring, you will see that as Java added features, the implementations became simpler.

Java's design hypothesis was: if we take away these features, median programmers won't blow their feet off accidentally. But it turns out that to get things done you need well-above-median programmers to provide tools, APIs, frameworks, leverage; and they, in turn, will find a way to do it. In the case of Spring, that's been marked by a slow retreat from heroics into progressively simpler approaches as and when Java itself allowed for them.

Spring 5, for instance, sets Java 8 as the minimum, and they went through the whole framework refactoring, cleaning up, cutting down and so on wherever that baseline allowed them to.

Disclosure: I work for Pivotal, which sponsors Spring, though on unrelated software. But I've spent a lot of time with Spring folks. They are smart.


> What makes one language sprout AbstractSingletonProxyFactoryBeans and another not?

In this case, it's the lack or availability of alternatives. You can't take away every feature that isn't the strictest possible OOP and expect people to create expressive and simple code. Remember that those things were created by the best professionals around, and won out as best practices and standard open source tools.

People do not write singletons in Python because it has actual static (module-level) variables. Any moderately complex Python code is full of factories, but people don't even notice, because they are actually just a function declaration around the same code that would be there anyway. There is a lot of "writing to the interface" that Java people do with abstract classes, but it comes nearly automatically with duck typing. And for the "bean" part, people just use dictionaries.

Go people seem to be taking the wrong lesson from those things.


> I believe Golang has two different kinds of productivity. Productivity within the project goes up quite a lot with generics. And then, everyone tends to immediately treat every problem as "let's create a mini language to solve this problem". The second type of productivity - which is getting more and more important - is across projects and codebases. And that gets thrown out of the window with generics.

This is a very valuable observation, relevant outside of Golang, which underlines the pitfalls of abstractions. Design patterns aim to standardize abstractions so that other readers of the same code can easily understand them.

I used to capture the same related principles in what I called "IKEA code" - simple, elegant code that is cheap to write, easy to understand and use and not meant to last forever. I.e. code and design that is meant to be replaced. I think the concept of "local productivity vs global productivity" is very related and completes the "IKEA code" principles. Rather than suggest one is better than the other, I think it's important to understand the motivations when making a choice.


>The second type of productivity - which is getting more and more important - is across projects and codebases. And that gets thrown out of the window with generics.

Generics increase type safety (no more interface{} casts) and reduce code duplication. I don't see how this could possibly throw productivity "out the window". If anything, the effect will be the opposite. The tradeoff with generics has always been programmer time versus compile time and/or execution time. Assuming that programmers don't unnecessarily complicate things for themselves, programmer time will be significantly lessened with the introduction of generics. And there's nothing preventing programmers from burying themselves in a sea of abstraction even in a language without generics.


> One cannot have both I guess?

Not to be snarky, but...

The whole point of LISP-style macros is that you very much can have both. Ultimately, generics will compile down, either via type erasure or a generated specialization, to a "copy-pasted" function (in concept, at least). With a macro, you have ultimate control over how this is achieved.

The problem, of course, is that the people who care how generics are implemented all think their way is the best (leading to a proliferation of slightly-different-and-incompatible implementations), and everyone else just wants something that works fast and reliably.

Bad code can be written in any language. Good code can be written in most languages (yes, even C++...the jury is still out on PHP). Every feature that grants power comes at the cost of potential complexity. What ultimately determines success or failure is the community and best practices that develop around these features.

In this regard, I think Go will be "o.k." even with generics.


> But then, after seeing the examples, e.g. https://github.com/albrow/fo/tree/master/examples/, I could see the picture of what the current codebase would become after its introduced. Lots of abstract classes such as "Processor", "Context", "Box", "Request" and so on, with no real meaning

> Compared to a half-as-useful Java codebase where I have to carry a dictionary first to learn 200 new english words for complicated abstractions and spend a month to get "assimilated", this was absolute bliss

You're making a good argument for standard generics and their machinery being included as part of the standard library as opposed to bolted on.

Java incrementally baked generics in relatively late in its life along with a culture of a lot of ad hoc machinery to cover up the lack of useful lambdas.

The wisdom of Go putting these in the stdlib is you get their benefit universally without having to wade through a learning process with a uniquely flawed and ad hoc hierarchical ontology for each project.

Having such universal tooling is a sign that your language is powerful enough to lift out and abstract real problems. That's a good thing! Don't mistake Java's legacy issues for issues Golang might have. It probably won't end up with the same problems because it starts with a better feature set.


I am not quite fond of the idea of having C++/Java-style generics (I simply don't like the syntax that comes with it), but I see that the current system is broken.

From my current perspective, I think fixing some things related to Go interfaces and container types (maps/slices) would make them even more usable than they are now, covering most of what people want generics for. Currently, it looks like hell to build something with the empty interface, and using a lib which is built around the empty interface doesn't look nice either.

Using Go maps is sometimes awkward too, as they require some weird attribute 'comparable' which some types possess and others don't. Why??? I mean, why can't the Go authors use their own language construct and use an interface like:

  type Comparable interface {
      Comparable(other interface{}) bool
  }
Similarly, I find it awkward that we still can't use interface slices[1]. I mean, I understand it isn't simple to implement, but it's certainly not impossible.

One thing on my todo list is writing an experience report with better examples. Its priority just got up-voted ;-)

[1]: https://github.com/golang/go/wiki/InterfaceSlice
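
For anyone following along, the wiki page's point is that the conversion has to be an explicit, element-by-element copy, because []T and []interface{} have different memory layouts:

    package main

    import "fmt"

    func main() {
        xs := []string{"a", "b", "c"}

        // ys := []interface{}(xs) // does not compile: no implicit conversion
        ys := make([]interface{}, len(xs))
        for i, v := range xs {
            ys[i] = v // each element is boxed into an interface value
        }

        fmt.Println(ys) // [a b c]
    }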


Counterpoint: coming from a Java background, I find totally freely-definable interfaces for features used by core container types to be a little worrisome.

In Java, I can't count the number of times I've debugged either correctness bugs or 'merely' horrible performance bugs caused by incorrect implementations of equals or hashCode methods. Implementing an asymmetric equals method is a colossal footgun, and incredibly irritating to debug, and yet somehow it happens time and again.

And there are other, subtler, arguably "correct" and yet extremely unwise things one can do given Java-like maps implemented on the equals and hashCode interface methods. For example, one can define two types T1 and T2 which both claim to be (commutatively) equal, and define some interface TX which is implemented by T1 and T2. Using a Map<TX,Object> and inserting members via T1 and T2 keys now has an interesting form of semi-defined behavior: when inserting "equal" keys, whichever type was used first will be persisted. Imagine doing this in some concurrent map, and imagine that T1 is 8 bytes and T2 is 400 bytes; enjoy debugging those occasional OOMs.

I've been very happy not to ever have this particular problem when defining the key types in my Go maps.
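
For reference, in Go any struct whose fields are all comparable can be a map key directly, with == defined structurally by the language:

    package main

    import "fmt"

    // Point is comparable because all its fields are; there are no
    // equals/hashCode methods to get wrong.
    type Point struct{ X, Y int }

    func main() {
        m := map[Point]string{}
        m[Point{1, 2}] = "a"
        m[Point{1, 2}] = "b" // same key: structural equality
        fmt.Println(len(m), m[Point{1, 2}]) // 1 b
    }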


Java does equals and hashCode horribly. Granted, they did not have Java to learn from but it's what we have.

Kotlin and Scala solve the problem pretty decently. Equals and hashCode are automatically generated for data/case classes. I've never encountered a messed up equals in Scala and only very rarely felt the need to override the generated equals, and then you make a very conscious choice to do it.

C# does it even better: there is an IEquatable<T> interface that defines the `bool Equals(T other)` method. It partially solves the problem of irritating universal equality.

If I were designing Java today, Object would have no methods, there would be a stdlib interface defining `equals(other: T)`, and data classes would automatically implement that interface.


> If I would design Java today Object would have no methods and there would be a stdlib interface defining `equals(other: T)`, and data classes would automatically implement that interface

This polymorphic equality is the default in some functional languages, like OCaml. It's fine for simple cases, but you really do need overloadable equality. Equality as a method in OO is a problem because of the asymmetry though, and often an even bigger problem because such languages typically permit pervasive null.


There definitely are cases where that's not enough, but it's a 95-99% solution.

The problems you mentioned:

Classes could change equals to accept subclasses, but then subclasses could not alter equality themselves.

T implementing `equals(other: U)` for some U solves a lot of other use-cases, but could make equality asymmetrical.

Reference equality would still exist, but would not be the default, e.g. Java's `equals` is Scala's `==`, and Java's `==` is Scala's `eq` operator (which you almost never use)



I recently had to write my own equals and hashCode functions in a Java project (a language relatively new to me) just so I could store my custom object as a key in a HashMap. The experience of doing this seemed so silly and unnecessarily low-level to me that I almost abandoned the whole codebase and looked for another language to start over.


Well, there is another problem with my suggestion: at the moment, maps have no problem using custom structs, as they derive comparability from the comparable attributes of the contained fields. If you used a normal interface, you would have to define how to name that kind of relation (all fields of X implement interface Y, so X implements Y? Sounds pretty similar to how the equality operator works in Go). Otherwise, you would have to implement Comparable for every custom type, which would in fact be even worse than the current state (it is bad enough that we have to do that for sort.Interface twice per week ;-).

So I completely agree, that having to implement basic attributes like 'equals' or 'comparable' is no good idea. Instead I would favor some logic which assumes some natural situation and lets you implement a custom logic if you want to do it.

Nevertheless, I think those problems just show some of the inconsistencies Go comes with. Don't get me wrong, I love Go and I like the way the Go devs work. It is just that I am worried that we will end up with Go 2.0 adding Generics without solving the issues within the otherwise pretty nice Interface system.


Go could automatically create hidden Test methods for methods used by core types like maps. If Equals and Hashcode have to be correct, then these tests could ensure it.


> C++/Java Style Generics

C++ and Java generics are nothing alike, so there's no such thing as C++/Java-style generics.


In fact, I am not completely sure about the state of generics in C++. AFAIK the term 'generics' was coined in the context of Java, but C++ has a similar concept they call templates. Maybe you could elaborate on the exact difference between those two concepts?


Templates can do a lot more than generics can, but come with corresponding downsides.

Templates are basically a form of copy/paste style substitution with fancy bits. When you instantiate std::vector<char> in C++, the compiler creates an entirely new class with "char" substituted in everywhere the type variable appears. This class has nothing in common with std::vector<MyStruct> - there's no common superclass, the compiled code may or may not be shared depending on low level details of your toolchain and the types involved, etc. That in turn means if you want to write a function that works for any std::vector you must also use templates, and your function will also be substituted, and your final binary will also have many similar-but-different copies of the compiled code, etc. However because of how C++ templates work you can achieve some pretty astonishing things with them. In particular you're getting code compiled and tuned to the exact data type it was instantiated with, so you can go pretty fast with certain data layouts.

Java generics seem superficially similar but in fact are different. In (erased) generics, a List<Foo> and List<Bar> are exactly the same class at runtime. They're both just List and both just contain objects. The compiled code is the same, you can cast between them if you know what you're doing, etc. Likewise if you write a generic function that works for any list, there's only one copy of that function. Generics have different tradeoffs: they're less powerful (you can't write a regex library with them for instance), and they don't auto-tune themselves to particular data types ... at least not until Project Valhalla comes along ... but they're backwards compatible with pre-generic libraries at a binary level and they avoid an explosion of compiled code bloat, which is a common problem with templates.


Thanks for clarifying, sounds like pretty deep stuff though.


Can you point me to a resource with details about this? I'm relearning C++ and a comparative evaluation of a language feature like that would be very useful.


Check out A Tour of C++; the 2nd edition is updated for C++20.


I think they were just referring to the angle-bracket syntax.


I see that the current system is broken.

Just where is the line between "tedious but workable" and "broken?"


> "Processor", "Context", "Box", "Request"

There are a lot of interfaces in the existing library that one could consider quite abstract as well. Reader? Writer? error? Another thing worth noting is that many/most C++ codebases manage to avoid the type of generic proliferation you're talking about.


I'm not sure C++ is a great example of a language that avoids generics abuse. The reason STL errors are so convoluted is largely due to the unintuitive and complicated way they use generics to support obscure features like changing the allocator used by containers.

Java is actually a much better example of where generics are used rarely and well. Containers are parameterised by exactly one thing - the type of what they contain. You don't parameterise them with different allocation strategies or comparators. Type inference means generics are often automatic, but still catch mistakes for you. And whilst nothing stops people making bad abstractions in any language, the standard library sets a pretty good example by and large.

That said, Kotlin's generics are better still and probably the best implementation I know of.


> The reason STL errors are so convoluted is largely due to the unintuitive and complicated way they use generics to support obscure features like changing the allocator used by containers

The main reasons template errors suck so much in C++ are 1) the lack of a proper mechanism to constrain instantiations (hopefully addressed by Concepts) and 2) the general mess that is function overloading and automatic type conversions.

For example, the classic "template hell" error message people usually encounter when starting to use C++ comes from failing to properly overload operator<< (example here[1]). That error message could easily be as simple as "No `operator<<` for class C", but instead the compiler needs to inform the user of every possible overload match, every possible type conversion that could cause a match, and every template that could have matched but can't be instantiated, along with a reason for the failure. Of course it doesn't help that std::ostream is absurdly abstract, but that's really just making a bad problem worse rather than being the fundamental root of the problem.

[1] https://godbolt.org/z/XveitP


I think it's 100% ok for a big utility library like boost or std to have complicated internals to support a variety of use cases. The go standard library likewise is big and complicated (although perhaps not quite as). My main concern is that user code can be written simply. And to be clear, I'm not holding up C++ as an example of a great language. In fact, I'm kind of looking at it as a worst-case scenario. I'm only saying that most C++ programs I've been exposed to are relatively free of creeping genericity.


> The reason STL errors are so convoluted is largely due to the unintuitive and complicated way they use generics to support obscure features like changing the allocator used by containers.

The reason STL errors are so convoluted is because the C++ template language is Turing complete.


No, it's because every function is implemented with a dozen helper functions, so compiler error stack traces have to show all the intermediate, generally return-only, helper functions.


True if we are speaking about C++11 and earlier.

With C++14 onwards, it is a matter of developers actually making use of static_assert, enable_if and, since C++17, constexpr if.

Not as comfortable as just using concepts, but they get the job done.


> no, it's because every function is implemented with a dozen helper function

Which are needed and/or allowed for reuse because the template language is Turing complete. More restrictive type-level languages have better error messages because the compiler either understands what you're intending to do or you're limited in the things you can express, full stop.


I'd say generics aren't just about avoiding copy-pasting; they also add type safety. At this point the closest thing is using an interface as a function parameter, but then you get a dynamically typed value (albeit limited to types that implement the interface).


That, and efficiency, as the generated code can be inlined in many cases.


The closest thing is auto-generating code, not using interface{}.


Yes, but there's still a depressingly large amount of code in the wild that just casts interface{} back and forth.


Common with ORMs. The gorm API is horrible - it's all interfaces.

The only reasonable ORM library I've seen is sqlboiler, which uses code generation.

https://github.com/volatiletech/sqlboiler


I don't necessarily mean interface{}. This:

type Thing interface { ... }

is better, but Thing is still a dynamic type.


I think it's hard to get a feel for how it would look in practice from those short examples.

Library code might be a little ugly and abstract, but as an end-developer, you usually don't need to worry about it. The code you yourself would see and write is going to have a lot more meaning and be more understandable.

I've rarely had to do anything fancy with generics in my own projects. Where I've seen it be most useful is when I want to offer a flexible public API -- in which case, the choice of writing a bit of generic library code is much less smelly than copy pasta.


> Library code might be a little ugly and abstract, but as an end-developer, you usually don't need to worry about it.

But the functions you provide for the code you're writing are a library for someone else.


> Lots of abstract classes such as "Processor", "Context", "Box", "Request" and so on, with no real meaning.

This is a completely false dichotomy. Just because a feature can be misused doesn't mean it's not legitimately useful. Generics make the language easier for the user, but are harder for the implementors. Most language designers / implementors recognize that this trade-off is worthwhile, and therefore nearly every other popular typed language has them. The Go implementors are the only ones against doing it, because, well, they would have to implement it, and they're concerned that it will make the codebase too complex for them to maintain. (Their sympathizers are against it too, which is still a mystery to me.)


> Just because a feature can be misused doesn't mean it's not legitimately useful.

The point isn't that it can't be useful.

In the real world, the amount of code I'm forced to understand that uses overly complicated abstractions is the issue. I'm glad one language finally took the opposite direction.


> In the real world, the amount of code I'm forced to understand that uses overly complicated abstractions is the issue.

This may say more about the specific codebases you work in than it does the software world at large. I spend most of my time in a couple of languages that all have generics and don't experience the problem you describe.


There are good and bad abstractions. In my experience the good ones are usually grounded in math. Used the way they are intended, generics/parametric polymorphism should reduce your cognitive burden. E.g. having a single, well defined and tested abstraction like Set that is type safe and whose operations can work on any type it can contain is surely a good and worthwhile abstraction. The same goes for parametrically polymorphic functions, which need not know about the concrete types they operate on; with the proposed contracts they can optionally require that their argument types possess a given constraint/capability, like equality.
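
To make that concrete, here is roughly what such a Set could look like under the draft design. The syntax follows the contracts draft; the comparable contract is my own sketch of how the draft lets you require equality, so treat it as illustrative only:

    // comparable requires that T support ==, which map keys need anyway.
    contract comparable(x T) {
        x == x
    }

    // Set is written and tested once, then usable with any comparable T.
    type Set(type T comparable) struct {
        m map[T]struct{}
    }

    func NewSet(type T comparable)() Set(T) {
        return Set(T){m: make(map[T]struct{})}
    }

    func (s Set(T)) Add(v T)           { s.m[v] = struct{}{} }
    func (s Set(T)) Contains(v T) bool { _, ok := s.m[v]; return ok }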


I'd argue that generics make the language easier for the writer, not the reader, which is what he was getting at. What I like about Go is that the language is small, so at the expense of some verbosity (that in most cases you write once), it's very easy to understand. One thing I dislike about some Java & Scala codebases is the "manual" unwinding you have to do to debug some functionality.


This is FUD.

Parametric polymorphic code is not hard to read at all - in fact, it is easier to reason about since it is universally quantified for all types.


Most people don't even know what parametric polymorphism is, let alone have ever used a language that supports it. I find it's basically impossible to explain just how powerful of a tool it is to someone who lacks experience using it.


"append","delete","copy" in Go is parametric polymorphism hardcoded in the compiler. append can be called with many different array types. There is nothing complicated with that.

Go can iterate over arrays regardless of the element type, in the same way, with the same construct, using "range"; therefore "range" is polymorphic.

The real debate here is whether the compiler should be "privileged" or whether there should be an API in userland to implement this behavior.


> Today it exists in Standard ML, OCaml, F#, Ada, Haskell, Mercury, Visual Prolog, Scala, Julia, and others.

> Java, C#, Visual Basic .NET and Delphi have each introduced "generics" for parametric polymorphism.

https://en.wikipedia.org/wiki/Parametric_polymorphism


A bunch of low-reach languages have true parametric polymorphism. A bunch of common languages have polymorphism that may be parametric or may be bounded by runtime type information checks, and you can't tell the difference from the types.

Neither of those contradict my assertion that most programmers have never done work in a language where the type system can make guarantees that they can trust.


This is a very pedantic point


The only programmers who haven't used a language with parametric polymorphism are those who have only programmed in C, Go, or dynamic languages at this point.

Outside of Java syntax noise, I really have my doubts that Java's parametric polymorphism is too complicated for the average programmer to understand.

  public static <A> A identity(A a) { return a; }


Java does not have parametric polymorphism because instanceof exists in the language. That reduces what you can say about something from the type alone to "maybe parametric, maybe bounded by runtime checks that aren't visible in the type."

How many languages provide you a way to guarantee at the type level that the behavior of a function cannot be influenced by instantiation of type variables? I don't believe that to be true of anything that runs in the JVM or in .NET, for instance, as runtime type information is always available. C++ has dynamic_cast. What common language actually has parametricity expressed in the type system?


That’s true..but Haskell has bottom and Scala has bottom + instanceOf checks and JVM reflection as well. And they clearly have usable parametric polymorphism despite things being “broken.”

Saying Java doesn’t have parametric polymorphism because you can cheat is like saying Haskell isn’t pure because of unsafePerformIO.


Agreed. There is always a point where it's the developer's responsibility to write good code, even in languages where it's very difficult to shoot yourself in the foot.


> Lots of abstract classes such as "Processor", "Context", "Box", "Request" and so on, with no real meaning.

A programmer can write Java in any language if they're willing to work hard enough at it.


Actually Java adopted those ideas from 90's C++ code style.


Complete bullshit. Generics have zero relationship or co-dependency of any kind with the abstractions/design patterns you mention. These are entirely possible without generics, and generics do not in any way encourage them.


There are less common design patterns particular to generics. The presence of generics does tend to encourage those. I have seen subsystems where coders used C++ templates where they could have used polymorphism instead.


... I'm sorry, but that's a bold claim when a large chunk of these patterns are literally used to allow you to operate over a variety of different objects in a loosely coupled fashion.


My gut feeling is that a significant fraction of the people who voted for them don't really intend to use Go that much.

They might try to please different crowds with Go 2. At the end of the day, they might just lose everyone.


Having generics isn't related to excessive abstractions. One is a language feature and the other is program architecture. It's definitely not an exclusive trade-off.

I'm sure there are plenty of Go apps with bad architecture, where a tiny dose of generics would greatly help.


Having generics isn't related to excessive abstractions.

I've seen the situation in C++ where the availability of templates led to excessive use of templates, simply because they were "neat." As a result, I found myself debugging generated code that had no corresponding source code whatsoever.


Poor design has nothing to do with generics. If anything they make API cleaner and safer to work with.
