I think Go has succeeded despite, not because of, the language itself. One very big limitation is the lack of generics or any ability to leverage higher-order types. Sometimes a small modification to a large codebase requires a huge footprint of modified lines, just because so much code winds up duplicated in slightly different contexts.
That being said, I really hope that Go's core team understands the drivers of its popularity and doesn't compromise the operational side for the sake of language improvements. Although higher-typed languages have no trouble achieving good runtime performance, it seems like there's a fundamental tradeoff at compile time. Scala, Haskell, even TypeScript have painful compile times. I don't know if there's any theoretical reason for it to hold, but in practice more typing complexity seems to lead to slow compile times.
And as for the topic of clear error messages, the higher-typed languages are all atrocious at this. Even templates in C++ are notorious for puking near-impossible-to-decipher errors. This is something I'm sure can be fixed, but it would probably take a lot of engineering effort to get there.
In general, I bitch about the Go language all the time. But I think we should recognize that the simplicity of the language gives us developers a lot of really nice peripheral usability benefits.
OCaml has always been known for having really fast compile times, and could be described as a "higher-typed language" I think.
> And as for the topic of clear error messages, the higher-typed languages are all atrocious at this.
Elm and Rust have made good progress here and have some of the most readable and useful errors out there, especially for beginners.
Also, adding nullables after the fact isn't very hard; other languages such as Dart and Objective-C have done it. The migration path isn't that hard either, since it is completely new syntax that doesn't have to be a breaking change, so migrations can be gradual. Out of all the features that the new higher-typed languages give, I think nullables are probably the best one, and one of the cheapest for all the others to adopt. And while you're adding nullables, you might as well add sum types, because that's how you implement them anyway.
"Even"? I actually miss the macro-like power of templates when I'm using generics in Java and C#, but generating the longest possible error message using templates is practically an Olympic sport. I presume that SFINAE is partially to blame, because a lot of the output enumerates all the candidates that didn't match.
To me it's pretty interesting how the three big static OO languages have implemented generics: super flexible permissive macros in C++ (with murderous error messages), type erasure in Java (gross) and actual generics in the runtime (C#). For everyday usage it doesn't matter much but I always like thinking about what's going on under the hood.
Haskell is arguably great with types, and it does type erasure as well.
Indeed, this is part of the reason the generics have been so long in coming to Go; you can find discussions about how to implement them as far back as 2009. One of the reasons they aren't using `<T>` to indicate them is specifically because it drastically changes the speed of parsing (since you would need to figure out whether that `<` is a "less than" operator or a syntax for generics).
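For illustration, here's a minimal sketch of the square-bracket syntax Go eventually settled on (Go 1.18+); `Min` is a hypothetical example function, not something from the stdlib:

```go
package main

import "fmt"

// Min is a hypothetical generic function. The [T ...] form parses
// unambiguously, unlike <T>, where "<" could also be the less-than
// operator until more tokens are consumed.
func Min[T int | float64](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(3, 5))              // type argument inferred as int
	fmt.Println(Min[float64](2.5, 1.5)) // explicit instantiation
}
```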
I can't find the reference right now, but there was a quote in one of the early discussions that was something like:
The Generic Paradox is this: You can have slow programmers (no generics; programmers have to do their own code duplication), slow executables (where you do loads of unboxing), or slow compile times.
EDIT: Found the reference; the exact quote is: "The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?" 
The Golang generics proposal was specifically designed to try to reduce the "slow programmers" effect while avoiding either of the other two effects as much as possible.
Can’t speak to the first two, but in some places TS has pathological compile times when you’re trying to give more information to the compiler. A recent example is trying to appease type inference on a set of generics where a wrapped function couldn’t narrow types from its wrapper even though they were totally valid but mostly inferred. I wrote a type guard which passed the same values with explicitly narrowed types and... it just never completes. Pegs all my CPU cores in VSCode. Used throughout my project and effectively kills type checking.
I know enough about the underlying function I’m calling that I just cast to any, but now I have a mine waiting to be triggered.
To be fair, this is in frontend land where it’s not TS’s fault that the types they support are so bonkers to begin with. But holy hell it’s not always obvious what’s gonna cause compile times to skyrocket.
I would not call 2x slower than C high performance. Decent performance, yes. But certainly not high.
It has easy-to-use concurrency, so it spends less time waiting than typical C code. This might lead to some high-performance claims. But it is not high performance.
When top performance is needed we don't even "write code" anymore, we use ASICs. BTC mining, network switching/routing, and encryption come to mind.
In a few years, a language that interops with Go will come out, where all the Go types have a ?-suffix indicating they are nullable. The language will be mostly null-safe. It will also sport sum types and pattern matching/destructuring in switch statements.
It will be called: Gotlin.
But this is not easily reversed. Not as easy as tacking generics onto the language. And the parallels with Java's maturing are just lovely. Also, we have seen what Kotlin is now doing for Java: a new language was needed to truly fix that one mistake (implicit nullability), introduced in the grandparent of Java (namely C), the great old grand-uncle of Go.
Yes. They are as important as records/structs a.k.a. product types.
> F#, OCaml, Reason, and Elixir
Haskell, Elm, PureScript, Kotlin, Idris, ...
> I know they add a significant amount of complexity to the language
Really? I don't think it can be so much more than generics :)
Elm has 'em, and the whole language is 5k lines of code.
Generics... Well, I just came from an Elm gig (knowing Haskell/C++/Java/Kotlin/Ruby) and I must say I was not too bothered by the lack of generics there.
Elm has generics.
No. That would be fixed by having something akin to typeclasses, traits or interfaces.
Elm's List, Maybe and Array are all defined using a generic type parameter.
BTW, one doesn't have to choose between them in Go either, but only because there are under-the-hood generics for the native types, which are only accessible to the language designers.
The fact is, when you're working with data coming from any other system, the data is or will become null, somehow, some way, and program code that treats this as impossible is just literally wrong, in a way that produces complete gibberish. Additionally, programmers don't want to pass huge lists of parameters to every function; they bundle things into structs to be easily passed around. But this model makes it impossible to treat a value as Optional at an early part of the call stack and non-optional later in the call stack, after it's been checked and verified. So you either pass everything as a separate parameter, copy things into different structs all over the place, or just make the value Optional everywhere, deleting the usefulness of making Optionals... optional. Just let it be null everywhere, and if it's null somewhere it shouldn't be, the program throws an error -- like it should, because there's an error.
That's fine though. The program that treats it as possible but fails to handle it in every possible spot is wrong too, but wrong in a completely unpredictable place, way and even number of times.
It's freeing to start out assuming "title" can never be null. If it is, you get a quick failure at the very edge of your application where the data comes in. You make it nullable, and the compiler tells you all the places where you made the wrong assumption and now need a fix. Or even better yet, you give it a default value, even an empty string, and nothing more is required.
Sanitizing input is a very common pattern and I don't see why this option is such a deal breaker. And it would normally just be needed at the system boundary, rather than "all over the place".
Or, you share types between different services using something like grpc or thrift, and then you really can trust that the values coming in are not going to be null.
I definitely don't agree that settling for implicit null is the best option.
I don't really understand this part. You can't treat it as impossible, because `Optional[T]` is a different type than `T`. Anywhere you try to use an `Optional[T]` as a `T` is a type error.
I think you're saying "there are some cases where `T` can also be null that are now unaccounted for," which I don't think is really relevant. That's more a matter of API design; a hypothetical function that receives data from a server should return an `Optional[T]`. One that treats the operation as infallible would throw an exception just like in an implicitly null language, right?
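To make the distinction concrete, here's a rough sketch of such an option type in Go-with-generics; `Optional`, `Some`, and `None` are hypothetical names, not stdlib types:

```go
package main

import "fmt"

// Optional is a hypothetical option type: a value that may be absent.
type Optional[T any] struct {
	value T
	ok    bool
}

func Some[T any](v T) Optional[T] { return Optional[T]{value: v, ok: true} }
func None[T any]() Optional[T]    { return Optional[T]{} }

// Get forces the caller to check for absence explicitly.
func (o Optional[T]) Get() (T, bool) { return o.value, o.ok }

func main() {
	o := Some(42)
	// var x int = o // compile error: Optional[int] is not an int
	if v, ok := o.Get(); ok {
		fmt.Println(v)
	}
}
```

The commented-out line is the point: using an `Optional[int]` where an `int` is expected fails at compile time, which is exactly the type error described above.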
> The fact is, when you're working with data coming from any other system, the data is or will become null, somehow, some way, and program code that treats this as impossible is just literally wrong, in a way that produces complete gibberish.
Your program and the other program has to agree on some protocol. That can have optionality built into it if you expect them to not send certain fields. If that protocol is violated then either you made a mistake (protocol was not accurate) or they did (sent poorly formatted data); either way, you can handle the error without nulls.
> however this model makes it impossible to treat a value as Optional at an early part of the callstack and Non-optional later in the callstack after it's been checked and verified.
I think what’s being described here could be easily encoded as variants/sum types, or you could have functions that give optional fields default values. Without a specific example it’s hard to discuss what you’re really trying to say.
Optionals everywhere makes that explicit, vs. something implicit that you can forget.
If you mean `unknown`, that's a TypeScript thing.
And personally I love working on a code base, and with coders, that differentiate between null and unknown. But there's no denying it's a footgun, and many get no value besides bugs from these two. It's kind of similar to how JS gets boolean comparisons weird, which is worked around by convention.
I think they are very similar in that there are now 2 nully things to guard for.
Option types suck in Java, as they were tacked on later. Maybe in Haskell, or even nicer, Maybe in Elm, is where the party's at. It is built in at the center of the language. And getting a value (v) from a list by an index should give you a "Maybe v", since you could be out of bounds. That's the strong typing that basically keeps me safe when viciously attacking a code base I have not touched for some time (or did not write to begin with).
Adding optional arguments everywhere is a bit like declaring checked exceptions everywhere. It's true that optionals make the code more explicit, and they force developers to handle every situation, but in the end you still need developers who actually care about writing readable and maintainable code for it to be a net gain.
Something I have come across a fair bit is code where too many values are declared as optional, so the developers try to build an elegant solution by mapping functions over these optional values, returning a default at the end. If there is a value, then the expected code path gets called, else it just silently skips the rest of the code and returns back up to the top layer. This causes a similar issue to what can happen with promises - you lose the context of where the first "error" happened. This could be avoided by eagerly throwing an exception in the first absent case, but that takes us back to where we started with null.
I think the "best practice" way of using optional values is to get buy-in from all the developers on the project that basically nothing should be optional, ever. The goal, then, is to explicitly throw an exception (or return an error) at the very top level and ensure that only actual values enter into the main part of the project's code. But enforcing that is more of a management problem than a technical problem.
No, it doesn't; there are a couple of easy, obvious ways to do that, depending on how you get the data components and how far it is useful to pass them as a group without ensuring none are null. It's not even mildly challenging, much less impossible.
Go doesn't have implicit nullability. You have to declare that something is a pointer for it to be nullable.
There are a few kinds of pointers: "normal" pointers, slices, maps, function pointers, and interfaces. Any of those pointer types are nilable. Regular structs and primitives are not nilable.
var x someStruct = nil // this will not compile
var x int = nil        // this also won't compile
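A runnable sketch of the distinction described above (nilable reference types vs. value types with zero values):

```go
package main

import "fmt"

func main() {
	// Only pointer-like types can be nil: pointers, slices, maps,
	// channels, funcs, and interfaces.
	var p *int
	var s []int
	var m map[string]int
	fmt.Println(p == nil, s == nil, m == nil) // true true true

	// Value types can't be nil; they get zero values instead.
	var i int
	var str string
	fmt.Println(i, str == "") // 0 true
}
```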
Sorry, but to me that's a clear symptom of implicit nullability.
Edit: clarification, to me it does not show in the type signature that something may return a null, also I do not have to unpack a possible null value (like with Maybe or Option).
Your definition of "implicit nullability" is suspect here.
Go doesn't even implicitly de-reference things, which is what I think you're intending to say that it does. (EDIT: field access does cause a dereference, as was pointed out in a response to this comment. I still think of this as an explicit action when you're specifically operating on a pointer variable, but I'll concede this point here.)
Your position on this is really confusing at the moment. Go handles nullability in an extremely different way from Java.
Edit for your edit:
> clarification, to me it does not show in the type signature that something may return a null
The best way to know something won't be null is to use value types where possible. Don't return nilable values unless you have a good reason. When you return nilable values, only return nil when the error value isn't nil. This is a common pattern in Go code. Why would you return nil if there wasn't an error? If there was an error, why would you care what else was returned, outside of some very exceptional circumstances?
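A sketch of the convention described above; `findUser` and `User` are hypothetical names, but the shape (nil result exactly when the error is non-nil) is the common Go pattern:

```go
package main

import (
	"errors"
	"fmt"
)

type User struct{ Name string }

// findUser returns a non-nil *User exactly when err is nil, so a
// caller who checks the error never dereferences a nil pointer.
func findUser(id int) (*User, error) {
	if id != 1 {
		return nil, errors.New("user not found")
	}
	return &User{Name: "alice"}, nil
}

func main() {
	u, err := findUser(1)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(u.Name) // alice
}
```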
> also I do not have to unpack a possible null value (like with Maybe or Option).
I write Rust and Go professionally... I'm familiar with the pros and cons here, and I wish that Go would adopt Sum Types, but this has nothing to do with "implicit nullability", and in practice... it's really not a big deal. I've written huge amounts of Go code, and I just don't remember encountering nil exceptions in production except once in a blue moon -- and it is similarly common in Rust to hit an "expect" or "unwrap" that you thought was statically impossible to reach. Go has good editors (like GoLand) and linters (like golangci-lint) that make it easy to avoid most of the practical pitfalls.
And then make a Maybe sum type? But then it is too late, as the std lib communicates with nils instead of Maybes.
> The best way to know something won't be null is to use value types where possible. Don't return nilable values unless you have a good reason.
I'm not doing this, I'm using an API that does! I cannot choose what some code returns, all I can do is --sigh-- add another null guard.
If it is up to my discipline to add null guards or face runtime explosions, I consider the nulls to be implicit.
That you find as many runtime problems with unwrap/expect in Rust (which have a lot of red tape on them in API docs) as with missing null guards in Go is great for you, but to me that does not make it a good design choice in Go.
But they're still explicit nulls.
You seem to be saying that only implicit nulls require null guards, but that's just not what that means (as far as I have learned).
You seem to hate nullability as a concept, which is fine, but that's different from hating implicit nullability.
> That you find as much runtime problems with unwrap/expect in Rust (which have a lot of red tape on them in API docs) as with "missing null guards" in Go, is great for you, but to me that does not make it a good design choice in Go.
It's also a comment on how NPEs are just not a common hazard in Go compared to my past experiences with Java, especially when combined with good editors and linters.
Except that there may be other reasons to use pointers than just nullability. And in that case, there's no mechanism to specify whether or not the thing being pointed to is nullable.
I think that's where the implicitness in question comes in.
But I might well be mistaken.
Except when accessing a field via a pointer to struct.
It is just a bad contract, as it is often misunderstood, and it allows for runtime errors that could easily have been caught at compile time.
I agree that non-nullable references are wonderful, and Go will probably have to grapple with that eventually. I just don't want people to get the wrong idea. Go has value types -- not everything is a reference that is implicitly nullable.
C isn't implicitly nullable, you are correct.
I agree that Go should consider adopting non-nullable references.
That being said, implicit nullability mostly leads to human errors, stemming from our inability to fit all the parts of a complex system in our brains. I'm curious whether there are examples where machines write code and use plenty of implicit nullability without ever causing any NPE issues.
Just a thought.
Well, Haskell compiles to binary, and Elm compiles to JS. The resulting binary/JS is basically machine-written and does not throw any NPEs.
It's certainly common, but of the things it would be useful to constrain to explicit, non-default nilability, pointers are pretty high on the list.
The same thing as is done for any other non-nullable value in static type systems.
Also, how fast will this language build large projects?
> Also, how fast will this language build large projects?
I wonder what the implementation of generics will do to Go's otherwise stellar compile times. A code base heavily using generics can easily double the compile time. Not sure how this will be sold; probably "then simply don't use it!"
Flesh to me. (vegan) :)
Special props to Ian Lance Taylor and Robert Griesemer for their continued revisions of drafts, and exemplary discussion with the community in implementing feedback.
Lack of enums - not that big of a deal. I've never had any issues with defining an enum-like type, which is a common idiom in Go.
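The enum-like idiom in question is usually a named type plus iota-generated constants; a minimal sketch (the `Color` type is just an example):

```go
package main

import "fmt"

// Color is an enum-like type: a named integer type with iota constants.
type Color int

const (
	Red Color = iota // 0
	Green            // 1
	Blue             // 2
)

// String makes Color satisfy fmt.Stringer, so values print by name.
func (c Color) String() string {
	switch c {
	case Red:
		return "Red"
	case Green:
		return "Green"
	case Blue:
		return "Blue"
	}
	return "Unknown"
}

func main() {
	fmt.Println(Green) // Green
}
```

The usual caveat, and part of the complaint about missing enums, is that nothing stops you from writing `Color(42)`; the compiler doesn't enforce exhaustiveness.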
Modules - it has been fixed. Are there any issues left?
Too big executables - I mean, OK, so what? Why is it a concern?
Lack of optimizing backend - could you elaborate on that?
Middle class GC implementation - are you referring to lack of manual tuning?
A few weeks ago: https://news.ycombinator.com/item?id=25750582
Personally, the proposal I was most excited about was to make ints be arbitrary precision by default. As someone who does a lot of math, this would have made Go much easier for me to use. Sadly, this proposal was scrapped a while back.
I'm not sure "C didn't have it" is a good litmus test for determining the value of features... C is called a portable assembler for a reason.
> Yes, they are convenient, but they also add a lot of complexity to the language and toolchain.
Conversely, proponents of generics would argue that not having them creates complexity for software developers who have to come up with alternative design patterns where generics would be a better fit.
> Personally, the proposal I was most excited about was to make ints be arbitrary precision by default.
Why would you want that? C doesn't have it... ;)
What I always found hard to swallow is that Go's built-in functions sometimes did have generics. It's just that you, as a library/application author, were not allowed to create generic APIs yourself.
If generics were unnecessary, why did the stdlib/language need them? Couldn't they just have lived with, e.g., make_map, make_slice, make_chan, etc?
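For reference, the built-ins in question (`make`, `append`, etc.) have always been generic over their element types in a way that, pre-1.18, user code couldn't replicate:

```go
package main

import "fmt"

func main() {
	// make is effectively generic: one name works for maps, slices,
	// and channels of any element type.
	m := make(map[string]int)
	s := make([]float64, 0, 4)
	ch := make(chan bool, 1)

	m["a"] = 1
	s = append(s, 2.5) // append is generic over the slice's element type
	ch <- true
	fmt.Println(m["a"], s[0], <-ch) // 1 2.5 true
}
```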
But then Go is not a public place: it's the private domain of Google.
I decided a long time ago that I don't want to learn a private tool without being paid.
Anyway, yeah, no argument there at all.
I'll be honest, as an outsider that's somewhat familiar with Pike's obsession with simplicity in Go, I'm actually a bit surprised this is getting in. It does seem like the kind of high-complexity feature that was deliberately excluded from the language as part of its overall design ethos.
But it really comes down to a weighing of pros and cons. Generics have the potential (though definitely not the guarantee!) to trade off developer complexity for compiler and toolchain complexity, and the preference on that choice is a personal one.
It can be a good test.
One reason why C is so ubiquitous is the simple binary interface. This makes it easy to reuse code since the libraries can be imported by every other language out there. C code is relatively simple and the compiled objects follow simple binary interfaces.
Adding features to languages almost always increases the complexity of these binary interfaces. Eventually they become so complicated that nothing will ever interoperate with software written in these languages. The increased language complexity reduces the reusability of software produced in that language.
C++ had this problem and Rust is following in its footsteps. Rust libraries are really only reused within the Rust ecosystem. Those that are meant to be universally reusable will no doubt offer a simple ABI that lacks all the benefits of the Rust language.
It might not be used much in the stdlib (which is relatively small!), but libraries and applications rely on void * heavily in my experience.
Personally I'm a huge fan of generics but I can understand how the keepers of Go might be reluctant to go down the path of C++, Java and C#.
And many C programs just emulate them using crazy macros, that are harder to write and are less typesafe than proper generics.
The FAQ is still accurate.
"The time has come to change Go, given what we have learned over the past decade of using it in production. -rob"
The reason this proposal was accepted is that Rob and others liked this plan and said over a decade of production use taught them it is an important thing to have. In fact, Rob asked Wadler and company to help get things right for Go, so I would say you'd bet wrong.
I do not remember him being against generics. He is the one who contacted his former colleague Phil Wadler to help with the theoretical validation of the type parameters proposal. In his talk about the Go 2 draft specifications (back then the proposal was still based on contracts) he was positive that a good design can be found.
Rob Pike has nothing against generics and never expressed criticism of them. His concern was always fairly practical; among the numerous forms of generics that exist among different languages, and the significant variation and rapid evolution of them, how can such a feature be safely added to the language?
Apparently he's nervous but can think of useful applications of generics:
Robert Griesemer was a coauthor on the Featherweight Go paper so I'm assuming he supports it. I would assume Ken Thompson doesn't care for generics but I think he's retired so he wouldn't vote on it anyway.
The bottom line is that generics allow the production of safer, more robust, more evolvable, more documented, more performant code.
Along with static typing, generics are simply a feature that no language created recently should be without.
I can read generic code in multiple languages even when I'm not fluent in these languages.
Once you get used to generic code, it actually becomes easier to read and understand than code where everything is type cast or Object typed all over the place.
Simple generic code is fine, but generics can be abused to over engineer things.
IMO Golang needed sum types and match statements, but it didn't need generics.
Personally I don’t think their time spent considering has resulted in anything better than if they had rushed into a solution (I’m not a fan of this proposal). But that’s just my personal opinion.
But it's just taken that long to come up with a good design. Most of the work has been pushed forward by core team member Ian Lance Taylor who put forward the case in https://github.com/golang/proposal/blob/master/design/15292-...
This abstract proposal has been updated to link to a summary of concrete proposals at the footer, "presented for historic reference. All are flawed in various ways":
Type functions (June 2010) - https://github.com/golang/proposal/blob/master/design/15292/...
Generalized types (March 2011) - https://github.com/golang/proposal/blob/master/design/15292/...
Generalized types (October 2013) - https://github.com/golang/proposal/blob/master/design/15292/...
Type parameters (December 2013) - https://github.com/golang/proposal/blob/master/design/15292/...
But it wasn't updated with implementation proposals after 2013, most notably
Contracts (2019) - https://github.com/golang/proposal/blob/master/design/go2dra...
The general consensus seems to be that powerful type systems are very effective.
Personally, the low footprint runtime and concurrency primitives are enough for me and I wouldn't mind the language becoming "less simple" if it helps the ecosystem.
Once generics are implemented, I can imagine people requesting for the next "missing" thing.
I hope it doesn't pick them up like Java did.
When Java was considering generics, there were two major proposals out there. Sun decided on easily the worst one: type erasure. Now we're stuck with it.
When Java was considering closures, there were two major proposals out there that I recall [one being to get rid of Java's broken local variable closure semantics]. Sun (Oracle? forget) again picked the worst of the two proposals. Now we're stuck with a real monstrosity.
Java has an amazing history of picking the wrong way to do things and permanently saddling developers with it.
More details: https://www.beust.com/weblog/erasure-vs-reification/
(Perhaps I should clarify that I don't have a strong opinion on type erasure.)
Though using type erasure may have made the JVM a better target for other languages.
The developers behind Go have a really strong culture of taking a ton of time to implement any major language changes; very reminiscent of Java and C++. Talk around Generics began, well, when the language was first created, but even more seriously like five years ago, and it'll probably be another year before it hits production.
My favorite feature of Go is its characteristic of not carbon-dating codebases. Go written a decade ago looks almost the same as Go written today; Contexts would be the single major pseudo-language-level feature added in that interim which may give away newer code. Adding new features is still important, balance in all things etc, and code written after generics will give another epoch of carbon dating. Go strikes this balance in a way that should be a model for every other language.
Perhaps. But there is (currently) no other feature that people have been whining about nearly as much as they have been whining about generics. So I think it's going to be a while before another need is felt to the same degree.
They both have garbage collection, they both perform about 3x slower than static C and they're both mostly used for network services, which is the same as Java at google.
Also, "3x slower than C" is a baseless claim for both; at least for Java, for comparably big code sizes, Java will easily win, because mallocs are expensive, and arenas, pools, and the like are a poor man's GC with worse performance, while JIT compilation can do some really aggressive optimizations, as well as heap compression, etc.
It will make it so much easier to enable typed port objects, which can still re-use all the handy functionality for connecting inports/outports, traversing the dataflow graph, etc etc.
a, b = w < x, y > (z)
I detest how syntactically overloaded our few ASCII symbols are, and using square brackets for templates seems wrong.
Edit: Even better, use gofmt to convert each ASCII symbol into several unique Unicode symbols depending on usage, to show the semantic meaning. So `[` as an array subscript would be represented differently from `[` as the template operator.
Edit 2: I suspect this one decision will prevent some developers from investigating Golang, thus affecting uptake - choice of syntax matters.
In the case of Go and the example you’ve given, if `w` were a function then the code should read like this:
a, b = w() < x, y > (z)
(w<x,y>)(z) // w is a generic function
(w<x),(y>(z)) // two comparisons
D, Modula-3 and Ada use parenthesis.
ML based languages use quoted letters.
There is no everyone else.
I have gone into more detail about why I don't like the square-bracket notation for generics in one of the other replies, so I won't reiterate myself here; I simply suggest you read the replies before commenting.
The reason I prefer angle brackets is just because I think it’s a little more readable. For me square brackets and parentheses look too similar in long function declarations. This might be a symptom of my dyslexia but the fact remains it’s a real readability issue for me.
I actually wouldn’t have minded if they used another visually distinct character either, like the << / >> characters that were also proposed (even though unusual characters have usability drawbacks when typing code).
"We don't need exceptions"
"We don't need ORMs"
That's great, until you realize you have an actual product to build.
And no ORM != manual mapping of dates. All kinds of drivers have custom adaptors for data types.
(Also a query builder is not an ORM).
>That's great, until you realize you have an actual product to build.
That's exactly the problem with ORMs. You get worse SQL generated behind the scenes, with worse performance, and less control.
It's just hidden under the carpet.
Plus, if you build your product with heavy domain objects in 2000s OO-style you're doing it wrong.
And if you don't, and use, e.g., data classes and record structures, then you don't need an ORM, since there's no "O" to map to.
Also, talking about such low-level performance issues makes me think we're already talking about very different problems. Maybe you're working for Google or Amazon, or on a pretty performance-heavy application. Then OK, I agree with you.
But my discomfort is with most startups, and most average companies, choosing Go and doing all of this reinvention when what they are building is basic CRUD for a web app that gets less than 100 reqs a day.
For the use cases I've seen of Go so far, the bottleneck was caused by using it and its ecosystem, because the slowest part of the system is development time and the need to rebuild from scratch a lot of things you'd otherwise get for free.
Yes, because that's what you want or need, your database drivers to implement custom adapters for data types.
What do you do when your app supports multiple database types? What do you do when you have to switch driver for whatever reason?
ORMs and user-friendly query builders are a layer on top of the driver; the driver shouldn't contain them...
So what's the timeline for proper exceptions now?