[dupe] Why Go Is Not Good (2014) (yager.io)
413 points by kushti on Dec 9, 2015 | 453 comments



https://news.ycombinator.com/item?id=7962345 << 527 days ago, 356 comments


Thanks for the link. I'm kinda happy this resurfaced because I missed it on its first go-around (pun intended), and I'm happy I read the article. It's nice to see some of these language features that I want to see in my language as well.


While many of these points (on generics especially) are completely legitimate, this article will fall on deaf ears. My impression of the Go community (both within Google and outside of it) is that there is a very... moralistic?... sense of "You don't really need that, we know best" going on.

It aims to be a pragmatic language. But IMHO the anemic nature of the type system is a practical handicap that they have made a point of pride out of not addressing. It leads to boilerplate code and fragile code.

I am no academic language snob -- I like Rust, and have been known to like me some C++ templating, sure, but I can understand a critique of complicated type systems that laypeople cannot understand. But after my brief exposure to Go, I was very, very frustrated. I don't think it really solves the problems it says it solves.


Any critique of Go seems to be met with angry pitchforks in this place.

As you say, the Go developers seem to have developed a kind of bunker mentality where they interpret legitimate criticisms of the language design as personal attacks, and respond by wearing Go's shortcomings as a badge of honour.

It's not, I think, entirely healthy.


I think everyone who uses Go for anything is pretty clear about why they like it, and one of the major reasons is simplicity. So why are people then surprised when criticisms over lack of features fall on deaf ears? And to be honest, this whole argument about pitchforks seems like a straw man. If anything, it's currently fashionable to dump on Go at every opportunity. Hell, it's fashionable to dump on everything around here at every opportunity. It's a way to put yourself above something with the least amount of effort.


My (inexpert) opinion is that the kind of computer language researcher or practitioner interested in exploring how far you can go with type systems is dissatisfied with Go.

But there are many types of static analysis which accomplish similar (or more dramatic) goals than what can be done with type analysis, and the simplicity of a language makes those kinds of analyses more reachable. Examples: gofmt and gofix.

I think it's amazing what kinds of magic can be encapsulated with the type system, but my experience doing software is that complex types often end up being a hairball which is very change-resistant. Go's emphasis on lifecycle support for large programs may point us to new kinds of tools and methods. Whether those tools and methods end up being able to be encompassed by type theory is an open question, but it looks to me like Go is aimed in a great direction to raise the questions.


What part of types is change resistant?


To me, a lot of the more interesting areas of type research basically involve embedding logic into types. So types for sequencing of operations, types that constrain the data in them, etc. Those are awesome in that they enable static analysis to detect more sorts of logic errors in a digestible way, but my impression is that they end up encoding constraints in the type system that are then harder to change as the demands on the system evolve.


Agreed. I foresee a great future for Rust, and also a great future for HN posts in the form: "I know I'll get downvoted for this, but Rust is a terrible language because it lacks the following features..."


Not likely.

Rust, while lacking in various aspects as any v.1 release would, already has means of extension. A library can mostly replace a lacking feature while playing nice with the rest of the ecosystem by use of the type system and the macro system.

Roughly the same kind of extensibility is possible in Python (some later language features first appeared as approximations in third-party libraries) and even Java (where a kind of code post-processing is possible via annotations).

Unfortunately, it's unreasonably hard in Go. All you've got so far is unhygienic macros, slightly better than C's #defines.

This is sad because a number of other ideas in Go are right and well-implemented.


While Rust, in its own little world, is promising, its issue is that it tries to solve problems C++ programmers were facing 10 years ago, but 'good practices' and C++ devs limiting themselves to a subset of features "solved" most of these problems for most C++ developers. And that's Rust's main target audience.

Go's target audience is two groups: people previously writing stuff in C/C++ because they didn't have much choice unless they wanted to bring in a shitton of dependencies, but who in reality didn't want the complexity this brings, and people coming from scripting languages like Python and Ruby. And for these things, Go is pretty damn good. It has its downsides - like any language - but it works. Its strongest point, however, is its standard library, with 'modern features' and transparency. For me it was the first language where diving into the source code of libraries - even the stdlib - was so effortless that it became a completely normal thing to do. In C/C++, the most you do is dive into the headers, and in the latter case, this is not always a good idea if you want to keep your sanity (hello Boost).

Rust could gain traction the moment it finds a market, and a few high-profile, widely used projects written in Rust. But right now, I'm not aware of any.


> For me it was the first language where diving into the source code of libraries - even the stdlib - was so effortless and has become a completely normal thing to do.

A great point. It is amazing how many little things Go and its ecosystem provide that other languages have missed for years: go fmt, linked documentation, play.golang.org, and more. The language may look like it comes from the 80s, but the tooling is avant-garde.


Servo is one, but I believe that Rust needs another high profile project targeted to the embedded or low level system world to gain mind share.

It has to demonstrate that

Rust > ( C/C++ + Static/dynamic analysis )

in terms of safety and productivity.


In the medium to long run, I suspect criticism of Rust will be more along the lines of criticism of C++ – too ambitious, too many features, too hard to do simple things, too easy to do unsafe low level things – rather than along the lines of "lacks these features".


No, I've talked to Go people and they really do respond that way. Maybe my opinion is just one data point, but this entire thread seems to agree...


> I think everyone who uses Go for anything is pretty clear about why they like it, and one of the major reasons is simplicity.

Nope. I like Go because:

* Interfaces

* A sane, fast, build system

* Concise syntax, mostly

* Garbage collection

* Excellent concurrency support

* A (for the most part) well-designed standard library

* Optional semicolons

* Compile-time type-checking

* Static binaries

I like Go because of the features it has, not because of the features it doesn't have.

On balance, although Go sucks, it sucks less than any other language for the sorts of problems I use it for.

> And to be honest, this whole argument about pitchforks seems like a straw man

If you don't notice that criticism of Go is immediately and vigorously argued against... Well, you can't be following the comments very closely.


> If you don't notice that criticism of Go is immediately and vigorously argued against... Well, you can't be following the comments very closely.

Huh? Have you ever argued against anything on the internet and not been countered immediately? Do you think if I'd publish criticism like this about, say, emacs, haskell, firefox, twitter or puppies, I wouldn't get comments immediately telling me that I am wrong and fundamentally misunderstanding what an editor, programming language, browser, social networking platform or adorable animal photo is all about?


I'm confused. Your tone suggests you're disagreeing with me, but the words mean the opposite.

Do you think I said that "only criticism of Go is argued against"? Because I didn't.


I think some of that stems from the fact that the arguments against Go are often over-emphasised matters of personal preference, or just so frequently raised that it becomes tiresome to read them.

I love Go. I know it's an imperfect language and because of that I do often hate specific Go idioms. So I definitely don't have a "bunker mentality" when it comes to Go; nor any other language. But in terms of "getting stuff done" Go has generally served me - personally - better than any other language. I just get a little sick of hearing about how Go is a "bad language" when what people actually mean is "it's not productive for them personally."

I can program in over a dozen languages, so I do have extensive experience outside of Go. And as someone who is language agnostic it never ceases to amuse and irritate me just how zealous people get when trying to prove personal preference as scientific fact.


In the end there is no one true best language out there. Go is great for a subset of problems and making it better at other things is a balancing act.

The most popular languages are generally accidents of history. Unix gave us C, browsers gave us JavaScript, databases gave us SQL, etc.


> In the end there is no one true best language out there. Go is great for a subset of problems and making it better at other things is a balancing act.

This is the problem though - I think many people expect there to be a "one language to rule them all". Personally I like having lots of different languages that excel at some problems even if that means they fall short at other problems. But I think some people either want to specialise in a specific language, or spend so much time looking for perfection that they miss subtle beauties amongst a forest of flaws.


I think the language that wanted "to rule them all" was C++, and we all know how that ended. C++ can be procedural, OOP, functional; you have metaprogramming, generics, everything. Everything. C++ is everything. Every pattern, every design philosophy can be implemented in C++. How much time does it take to compile? How many developers know every C++ feature and pattern?


It's OK to see a language that has certain strengths but also some weaknesses due to the nature of the strengths.

It's sad to see a language that has certain strengths but also some painful weaknesses for no good reason except that it's v.1. So far the best I've heard is something like "We understand there's a problem for certain users, and we don't rule out adding generic types some time later, but it's hard and our priorities are different". It is indeed not easy to do well (though Java has somehow managed to find a satisfactory solution).

Well, I'll wait a few more years.


I don't really see how Go is, or should be, that limited as a factor of design (in contrast with something like Erlang). It's just that everyone in the ecosystem is focused on the same things, and when people with other use cases, who could benefit from the properties of the language, try to make themselves known, it's all "works for me".

Edit: Thanks for proving my point everyone.

Edit2: To be slightly less snarky, despite the article I don't see any reason why Go couldn't be a fit for e.g. an oscilloscope which today is running a complete, often multi-core, linux system with 100k+ lines of code (with help from an ADC and FPGA etc. of course). If it wasn't for the fact that few people would undertake such an effort when that would be akin to swimming upstream against the ecosystem.


By this measure, isn't C worse?


Even with C, you can do some type safeish data structures using macros.


I think that your view, while true, misses the point (or doesn't make it explicit enough): tools are what really matter, languages are simply less important and ultimately interchangeable.


I'm not understanding SQL as an accident of history, can you explain?


The ideas behind SQL started with Codd's relational algebra, which has some very large differences from SQL. See https://en.wikipedia.org/wiki/Relational_algebra for more. Query languages were developed based on that.

However as https://en.wikipedia.org/wiki/SQL documents, a team at IBM implemented something called System R (R for Relational) with a query language called SEQUEL that was renamed to SQL for trademark reasons. Then Relational Software (now Oracle) implemented a database that could run on non-IBM software, and made their query language mostly compatible with IBM's so people could port to their database. And everyone who came after has made their implementations compatible for the same market reason that Oracle originally did.

Even today there are people who want to return to some of the ideas that Codd had which SQL does not implement. But there is such momentum around SQL that it is unlikely to ever happen.


SQL was the query language of the first major commercially successful RDBMSs (from IBM, who developed the language, and Oracle) -- which weren't the first two RDBMSs, and were arguably successful for reasons largely unrelated to the query language chosen -- and became the de facto standard because of that. So, yeah, I can see the "historical accident" there.


The problem is that there is no empirical evidence to support any of the claims. Even looking at generics, the limited studies I've seen show that generics make people a little more productive when using a generic library, but far less productive when trying to write a generic library.

In short, these are entirely anecdotal and subjective points of view, so after the 1000th person says, "you are stupid, generics are amazing because .... my anecdotes" well eventually you tune it out.

Last, there are a ton of languages out there that have generics, richer type systems, etc. Go is trying something different, and given the lack of real empirical data, lots of experimentation is the best thing. Let Go do its thing, and let the other languages do theirs; why demand that all languages make the same trade-offs on these topics?


I agree with a lot of this, but where things get muddy with empirical evidence, is that it implies a certain "default". In this case that can either be something like "generics are useful" or something like "not having generics is useful". I don't think either hypothesis is supported by much of the sort of empirical evidence you're looking for. Basically, I share your sense that this is all anecdotal and subjective, but in reverse: after the 1000th person says, "generics are too complex and not having them is better because … my anecdotes", I eventually tune it out.

Experimentation is definitely the right way to go (har!), but that doesn't imply that people should be mum about the results of their personal experiments! Obviously none of these are actual scientific experiments, but when people (like the author of the OP) say "I've used Go and here is what I think", they are in essence reporting the findings of an "experiment" with the language.

I don't think anybody is demanding that Go make the same trade-offs as other languages; they're just documenting their thoughts on the effect of the various trade-offs.


At least in the Go community, it doesn't seem like people are ever saying generics as a concept are bad, but that there are very real tradeoffs involved in adding them to Go, and they're the kinds of tradeoffs the language designers and maintainers have decided they don't want to make.

I really like how simple Go is, but I also think generics are super useful, and that if Go could implement them in a Go-y way, I would be incredibly excited, and probably use it even more than I currently do. But I'd rather see the language focused on doing what it currently does well, and making sure it keeps doing things well, over seeing it try to throw in a poorly designed generics system like Java.


I'm not sure that I've heard the argument that generics as a concept are bad, per se, but I believe I've heard the argument that they aren't very useful. I (again, non-empirically) disagree with that, but I don't know if it is a common belief in the Go community, and I don't disagree with you at all that generics wouldn't fit very well into Go's philosophy and design and probably don't belong in the language. I think that's a shame, but I still like Go a lot.

On the other hand, I do think the author is spot on about nils and multiple return types for error handling. Returning a container type that can represent either a successful or error value is simple (arguably, simpler!) and less error prone. It is something I would love to see in the language. (Somewhat ironically, if it were possible to make generic container types, this wouldn't need to be done in the language itself, but could be a library.)


In practice I'm not sure I see where a PotentiallyErroredResponse object that wraps a response and error together is different from returning a response and an error.

The advantage of Optional over returning a value or null is partially that it makes the programmer more aware of the fact that they're dealing with something that might be null. The same thing would be true of an Errorable wrapper.

I'd argue that Go makes it more explicit by having compile-time errors when you don't deal with potential error responses.


The solutions are definitely similar, but the advantage is eliminating the potential for propagation of nil values.

> I'd argue that Go makes it more explicit by having compile-time errors when you don't deal with potential error responses.

Showing my ignorance: how does this work? Does the Go compiler check that you check the second return value from a method before doing something with the first return value? I thought you could just ignore the error return and pass along the returned value (which might be nil) as you please. If there's compiler support for avoiding nil propagation, that's great!

In the case of a container type like I'm suggesting, you get the compile-time checking from the type system – an error return value is not of the same type as the underlying type, so you have to explicitly get the underlying value out. Even if you accidentally propagate the container, you probably have more information than if you propagate a naked nil, because the error information is part of the container that was propagated, rather than a separate value that might be lost.
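To make the container idea concrete, here is a minimal sketch of what such a type might look like in Go today (hypothetical names: `IntResult`, `divide`). Note that without generics it has to be written once per underlying type, which is exactly why it can't simply live in a library:

```go
package main

import (
	"errors"
	"fmt"
)

// IntResult wraps either an int or an error. Without generics, this
// type must be duplicated for every underlying type you care about.
type IntResult struct {
	val int
	err error
}

func Ok(v int) IntResult       { return IntResult{val: v} }
func Fail(err error) IntResult { return IntResult{err: err} }

// Get forces the caller to confront the error case: the value and the
// error come out together, like a (T, error) return, but the wrapper
// travels as a single unit, so error information is never separated
// from the value it belongs to.
func (r IntResult) Get() (int, error) { return r.val, r.err }

func divide(a, b int) IntResult {
	if b == 0 {
		return Fail(errors.New("division by zero"))
	}
	return Ok(a / b)
}

func main() {
	v, err := divide(10, 2).Get()
	fmt.Println(v, err) // 5 <nil>
}
```

With generics, the single definition `Result<T>` would replace the per-type copies; without them, this boilerplate is the price of the pattern.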


>If there's compiler support for avoiding nil propagation, that's great!

Yes, it would be great, but there is no such compiler support. You can ignore errors completely. Maybe you could argue that compile errors for unused variables are a very weak sort of brake on ignoring errors.


One thing I liked about the Optional type in Java is that it not only offered better clarity, it also came with a few handy methods, like ifPresent().


There is no rigor to this blog post. He didn't have two teams build the same project with and without generics or anything like that.

The title is "why go is not good" which is drawing a conclusion based on an anecdote. It's the equivalent of walking outside in December, stating that it's cold, then drawing the conclusion that the globe isn't warming.


As far as I can tell, conclusions are based on facts about the language as well as other languages.

Maps, slices and channels are generic. I'd like to see code in Go written without maps, slices or channels.

Go will never implement immutable data structures. It's impossible to write an immutable data structure library without generics.

Go will never implement Futures / Tasks / Observables. It's impossible to write Futures / Tasks without generics.

Here are some statistics: In Go it's impossible to write 90% of the functions listed here: https://lodash.com/docs - because they take higher-order functions, which use generics. The fact that JavaScript has loops that could be used instead, yet this is still the most popular JavaScript library, cannot be reconciled other than by acknowledging that generics are generally useful.

It's a damn shame that the language has such flaws: the standard library and tooling are superb. And generics alone could single-handedly get rid of 80% of the problems of Go (errors can be modelled with a generic Result<T> which forces you to check for the error and allows chaining; generics would enable immutable data structures, which are much safer for concurrent programming...)
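The point about the built-ins can be seen directly: the three blessed collections are parameterized over their element types, a capability user code doesn't get. A minimal illustration:

```go
package main

import "fmt"

func main() {
	// The built-in collections are parameterized over their element
	// types — generics in all but name, just unavailable to user code:
	scores := make(map[string]int) // map[K]V
	nums := []float64{1.5, 2.5}    // []T
	ch := make(chan string, 1)     // chan T

	scores["a"] = 1
	ch <- "hello"
	fmt.Println(scores["a"], nums[0]+nums[1], <-ch) // 1 4 hello
}
```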


You absolutely can implement those functions in Go, if you're willing to sacrifice both type safety and performance. e.g., https://godoc.org/github.com/BurntSushi/ty/fun

Javascript is unityped, and you can certainly pretend Go is unityped too, by using `interface{}` everywhere.
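For instance, a sketch of that unityped approach (hypothetical `Map` helper, not taken from the linked package), showing exactly what is given up:

```go
package main

import "fmt"

// Map is a lodash-style higher-order function written against
// interface{} — it works on any slice, but the compiler can no
// longer catch a mistaken type assertion, and every element is boxed.
func Map(xs []interface{}, f func(interface{}) interface{}) []interface{} {
	out := make([]interface{}, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

func main() {
	doubled := Map([]interface{}{1, 2, 3}, func(v interface{}) interface{} {
		return v.(int) * 2 // panics at runtime if the element isn't an int
	})
	fmt.Println(doubled) // [2 4 6]
}
```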

If you linked to a similar set of functions defined in, say, C++, then I'd agree that Go has no real way to achieve something similar.

> (errors can be modelled with a generic Result<T> which forces you to check for error and allows chaining, generics would enable immutable data structures which are much safer for concurrent programming...)

Generics isn't sufficient for that though. You also need sum types, which Go doesn't have.

Lack of generics isn't objectively a flaw. It's a trade off. You may disagree with that trade off though!


You don't need sum types; Result / Optional etc. can be implemented in a language without them.

I can implement those in TypeScript too. It's a fast compiler, it has a type system, and it supports generics.


Sum types are how Result/Option/Maybe/Either are typically implemented. The type system guarantees that inhabitants of the type contain exactly one success value xor one error value, and that accessing the value requires handling both cases, which is checked at compile time.


Sure, but you don't necessarily need sum types for Either. Its more tedious, but totally doable without them, e.g. just with higher order functions:

  let left  = a => (l, r) => l(a)
  let right = b => (l, r) => r(b)

It's now impossible to consume the result without passing functions to handle both the left and the right value.

  let mapRight = f => val => val(a => left(a), b => right(f(b)))


Indeed, but now you're just being academic. Such an approach is quite bothersome. I maintain my initial criticism of your suggestion.

It's easy to be an armchair designer of programming languages. It's quite a bit more difficult to be in the driver's seat, because you have to answer the hard questions; you can't just throw feature sets against an HN comment wall and see what sticks.


You're coming from the same bias as the author of the article: Haskell and Rust are what a language should look like; Go doesn't look like that, therefore Go is bad.

Take one of your points:

> Go will never implement immutable data structures. It's impossible to write an immutable data structure library without generics.

Fine; I won't argue whether your statement is correct. But so what? That only turns into something anyone should care about if you also assume that "immutable data structures are The Right Way".

> Here are some statistics: In Go it's impossible to write 90% of the functions listed here: https://lodash.com/docs - because they take higher-order functions, which use generics.

This proves that Go is not built to use higher order functions. It does not prove that Go is flawed (unless you also assume that FP is The Right Way).

TL;DR: Go isn't trying to be a functional programming language. Some people think FP is the only way to go, and therefore think Go is bad. Those people need to realize that FP is not the only way to program, and to stop trying to force Go into their FP world.


I agree with most of your "so what?" point, but I think the lodash / FP thing is interesting because javascript is also not trying to be a functional programming language, and nor are the many other languages that have popular higher-order-function libraries (eg. Java, C#, Ruby, Python). There's a long trend of pulling in the ideas from functional languages that have proven to be generally useful, like lodash / Java 8 Streams style collections functions, while ignoring the stuff that might be useful but is harder to implement or work with, like purity, immutability, laziness, or sophisticated type inference schemes. Go is definitely anachronistic in not following this particular trend. Of course, that's the prerogative of its designers! Note that Go doesn't need generics to have 100% of what lodash has, it would just have to be implemented in the compiler, similar to slice, range, map, etc.

I have no problem accepting that FP is not the only way to program, but it sure would be convenient for me personally to have nicer ways to work with collections in Go.


As was mentioned before, you can absolutely do the same thing in Go that you see in lodash by giving up type safety - which JavaScript doesn't have anyway!


I actually saw that post right after I wrote the one you just responded to. While it's definitely true, what makes it more awkward in Go is the necessity to translate back and forth between the "traditionally typed" and "unityped" dialects of the language. I'm not (only) referring to performance here, but ergonomics. Because javascript is unityped everywhere, a unityped implementation of those sorts of functions is natural, but because Go is mostly type-based, it is less natural and more boilerplate-y to pass and return `interface{}` everywhere. Nonetheless, it's a good point.


Or you don't give up type safety. TypeScript compiles really fast and has generics.


I'm not familiar with TypeScript, but since it compiles to Javascript, I imagine it's using type erasure. Indeed, such an implementation of generics is easy to compile quickly.

If one adopted such a scheme in Go (which would also be fast to compile), you'd end up sacrificing a great deal of runtime performance, because everything generic would be forced behind a box. Such a thing is no big deal in a language like Javascript, but for a language like Go that aims to be fastish, it's probably a non-starter.

Finding a compiler that monomorphizes generics and is also as fast as Go is probably a much harder challenge. I've had some folks claim that one of D's compilers is pretty fast, but I haven't seen any good benchmarks to support that claim. Many of the other monomorphizing implementations I've tried (Ocaml, ML, Haskell, Rust) are all pretty slow. At least, much slower than Go's compiler in my own experience.


Once you get data races in Go, you realize that immutable data structures are indeed the right way. Too bad though. That non-threadsafe, racy map is your only generic data structure.

I think that critics are generally cutting way too much slack to Go. It's a horrible language - with a decent library and excellent tooling and documentation, but still quite horrible.


> Once you get data races in Go, you realize that immutable data structures are indeed the right way.

False. There are more things in multi-threaded programming than are dreamt of in your philosophy. For some of them, immutability is very much the wrong way. It means some threads will be playing with stale data for some time.

I'm not saying that I'd want to write that kind of a program in Go, mind you. But your over-generalization is blatantly false.

And, in fact, I suspect that most people writing in Go aren't writing the kind of program where you could get data races at all. Multithreaded programming in Go (if I understand correctly) is mostly a matter of handling multiple independent data streams, not threads that need to access the same data objects.

I suspect that your over-generalization in the second paragraph is equally false, but I have less experience to validate that opinion.


Even if we assumed that your argument is valid (but seriously, I invite you to at least provide one valid, complete example), what you're saying is that Go is good for that subset of cases ("some of them") where immutability is the wrong way.

That does not invalidate the fact that it has zilch to offer for the cases where it's the right way.


First: I didn't say that Go was good for that subset of cases. In fact, I said that I wouldn't want to write that kind of program in Go. I said that Go wasn't the wrong answer for the reason you stated, namely immutable data.

You want an example? Here's a video router for a TV station, which has multiple sources of user input (human-pushable control panels on two different data buses, plus serial data coming from multiple automation systems). You need to keep those control panels and automation systems updated with what's connected to what, even if they weren't the source of the command that changed it. And you need to keep the actual hardware switch up to date, too. And commands to the switch can fail, which you need to report back to whoever made the command. (One way the command can fail is if someone else locked an output to display a particular input.)

Faced with that problem, we implemented a single "state of the switch" data object that mutated as commands came in. But you could think about trying to implement it with immutable data. That would mean creating a new copy of the state of the switch for each (successful) command that was processed. That would mean copying a fairly large chunk of data many times a second, which would have been a challenge for the processor we had. That would also almost certainly mean a garbage-collected language which, when you're trying to respond within one TV frame (1/60th of a second), is a really bad idea. More to the point for our discussion, it would also mean that threads, which in our design only had to respond to one source of control, would now also have to handle state-of-the-switch updates pushed to them from other threads (or from some master). That seems like significant additional complexity to me. (Yes, I know that data races have their own complexity, but for our design, it was very clear how to prevent that. And if you're going to say that we could have had a separate thread receive the updates to update the control panels, now we've got a race as to who owns the hardware control bus to the panels.)


Now that's a good example!

First, a little correction: you don't have to make a copy of the entire state. This video explains the trick for getting immutable data structures that share most of their data with their previous version: https://youtu.be/SiFwRtCnxv4?t=8m39s - lists are straightforward, and vectors and maps are based on the same HAMT tree-like structure.

Of course, a system with real-time constraints and hardware limitations will have different optimal solutions. And yes, reference counting is the bare minimum you'd probably want for these structures (GC is even better).

However, we're talking about Go here - a language designed for writing servers that has a GC.

The solution in Haskell is actually quite nice: MVars [1] plus immutable data structure. An MVar contains the current state, represented by one such structure. takeMVar "removes" the variable - a thing which can be done atomically by the updating thread when the data becomes stale. After that, subsequent attempts to readMVar from other threads would block until there is a new updated value, to ensure everything is in sync. Finally, the updating thread does a putMVar, and all readers get the new value and continue executing.

The best part is they don't have to worry that the updating thread might start another update in parallel while they read: the data structures are immutable so the value being read is guaranteed to remain immutable. Even if the updating thread continues "modifying" the new structure in the background, it doesn't have an effect on the other consumer's version.

But yeah, all this is pointless if you have real-time constraints and therefore need super-tight control over execution time. It might be doable in a fast reference-counted language, but it will also be much harder to reason about the time it will take to release the memory for the segments that aren't in use anymore.

[1]: https://hackage.haskell.org/package/base-4.8.1.0/docs/Contro...


> The best part is they don't have to worry that the updating thread might start another update in parallel while they read: the data structures are immutable so the value being read is guaranteed to remain immutable. Even if the updating thread continues "modifying" the new structure in the background, it doesn't have an effect on the other consumer's version.

If I understand what you said here correctly, this doesn't work for my example. A thread cannot continue with a stale version (and function properly). It must operate on a current version all the time (or block until it can).


You're right. For your example that's actually an error, and it won't happen if you `takeMVar` before you start working on the new value.

I'm describing a slightly different example there, where it's okay to get the old data while updates are being "prepared" (e.g. every item in the dictionary is being fetched from the DB, typical for a server app). In that case, Haskell will work correctly. In Go, on the other hand, reusing the data structure may result in a program crash, as Go's built-in maps (which might contain that data) are not thread-safe.

(You can't even make the simplest type-safe, thread-safe mutable map that uses a RWMutex automatically under the hood. Because there are no generics)


> you realize that immutable data structures are indeed the right way

Well, that's false. Rust has mutable data structures but also statically prevents data races.


Do you mean this?

https://doc.rust-lang.org/nomicon/races.html

If so, then sure, that might work too if you really need to do it. I wouldn't go so far as to say it's the right way - it seems very bothersome to me.

It's more of a "yes, I'm willing to go through all this incredible pain, because I get some gain for it (constrained hardware? idk). Rust, please help me do it right."

Of course you can't do that in Go. Not only do you not get static guarantees, you can't even write a generic map with atomic access.

edit: by pain I mean this: https://doc.rust-lang.org/book/concurrency.html - and yes, that's painful compared to using immutable data structures.


> If that then sure, that might work too if you really need to do it. I wouldn't go so far as to say its the right way - it seems very bothersome to me.

There are plenty of bothersome things about "immutable only" too.

I've employed both approaches in earnest. Each has its own set of trade-offs.

> Of course you can't do that in Go. Not only you don't get static guarantees, you can't even write a generic map with atomic access. You have to remember to use RWMutex every single time. No generics.

I'm quite aware of Go's limitations, thanks.

I find it amusing that you've dismissed an entire category of practice - statically eliminating data races - at barely a glance. Irony, it seems. The very thing that people lament about Gophers is precisely the behavior you've demonstrated here! (Quite literally, in fact. How many Gophers have you heard say something like "generics is bothersome"?)

You've been polite, but snobbery is vexing, no matter where it comes from.


Fair enough. I admit to not knowing when you would prefer statically checked mutable data structures to immutable ones except for a few cases (dynamic programming arrays, fast matrix libraries, memory constrained environments).

I did use the word "seems" there, though. It's not really a dismissal; I would indeed like to be enlightened. In projects where I can afford a GC, I'd always take the GCed option (in my case, an overwhelming majority). Same for immutable data structures (use them whenever they can be afforded). Are those bad heuristics? ("Afforded" here refers to performance/memory constraints only.)

One thing that GCed languages don't solve very well is handling other more scarce resources (file handles, connections from a pool, etc). It seems that Rust managed to solve this nicely. If only it was possible to use GC for everything except those kinds of resources (perhaps it is?), that would be perfect.


> I admit to not knowing when you would prefer statically checked mutable data structures to immutable ones except for a few cases (dynamic programming arrays, fast matrix libraries, memory constrained environments).

Those sound like pretty compelling use cases to me, and also ones that seem well suited to Rust. You might also consider looking at Servo; I bet its engineers could list myriad reasons why immutable-only data structures are insufficient.

I note that performance is not the only trade off worth examining (to be fair, I think you acknowledged this). Another aspect of the trade off is abstraction, albeit this is fuzzier. Mutation can be more natural to a lot of folks. My pet theory is that we've built up a defense mechanism against mutation because it's the source of so many bugs; but Rust's static guarantees are worth consideration here. They remove many of the problems normally ascribed to mutability. For example, Rust not only prevents data races, but it also prevents aliasing mutable pointers to data at compile time, which defeats another class of bugs not related to concurrency at all.

My main point of contention with your comments is that you think you've stumbled on to the "right" way of doing something. In my opinion, that's nonsense. What's the point, even, to declare such a thing? Instead, focus on what the trade offs are, then make a decision based on the constraints you've imposed in any given situation. (Valid constraints absolutely include "immutable data structures are easier for me to reason about intuitively.")

> In projects where I can afford a GC, I'd always take the GCed option (in my case an overwhelming majority). Same for immutable data structures (use whenever they can be afforded). Are those bad heuristics? (Afforded here refers to performance/memory constraints only)

They don't seem like bad heuristics to me. They don't really correspond to my own heuristics, depending on what problem I'm trying to solve. (I once chose a language for a project based purely on the fact that I wanted to target non-programmers.)


It's not just that shared mutable state is hard; I'm thinking of the whole reasoning apparatus you get at your disposal:

http://www.haskellforall.com/2013/12/equational-reasoning.ht...

That indeed seems very much like something that can be called the "right way". If all functions in a given subset of the code are pure, I can even imagine a tool that combines Hoogle with your function's type signature and existing types to suggest how to finish writing your function (it's just a graph search, with types as the nodes and functions as the links). edit: seems like I don't need to imagine it - https://github.com/lspitzner/exference

Rust's way seems to me like they encode all the tediousness of dealing with shared mutable state into the type system. This is good, I guess, if you need to keep doing what you've always been doing but in a much safer way.


> That indeed seems very much like something that can be called the "right way".

No. It's just another useful tool in the toolbox. It comes with costs. Sometimes you don't want to pay them. Stop trying to monopolize "the right way."


I am not trying to monopolize "the right way". We were originally talking about Go, a language with a garbage collector made for writing concurrent servers. This is an area where immutable data structures are a no-brainer "right way" to avoid data races in the majority of cases, and I was expressing my frustration at the inability to write them in Go.

I really have no idea how the conversation became one about writing browser engines, embedded systems or systems with realtime constraints in Rust :)


> This is an area where immutable data structures are a no-brainer "right way" to avoid data races in the majority of cases

I continue to find your phrasing extremely off-putting, condescending and snobbish. I suggested a few ways of wording your concerns better, but it seems you're intent on remaining a snob.

I disagree that anything about your suggestion is a "no-brainer."


> I continue to find your phrasing extremely off-putting, condescending and snobbish. I suggested a few ways of wording your concerns better, but it seems you're intent on remaining a snob.

That's a bit over the top, but I'll concede that my wording needs work. I enjoy discussing concrete problems and projects - hopefully fixing this will help get more of that. You did make some very good points as to why we avoid mutation, and I will try to evaluate Rust in more depth.


I don't think this is a good analogy at all. The article is more akin to walking outside in December and stating "here's why December is too cold for my liking". The article is just a series of observations on the author's subjective opinion about a language, along with reasoning on how that opinion was formed. It should be read as "why Go is not good (in my opinion)".

This is a major problem that I struggle with: saying "I think" and "in my opinion" gets old really fast and makes everything you say sound waffle-y, but if you don't say things like that, some people will interpret your statements as if you are claiming to state objective fact.

My sense is that such a diminishingly small amount of the things people discuss is actual fact that I can usually prepend "I believe" to any sentence I read. I am pleasantly surprised when I find that this rule fails to work for something, but that doesn't usually happen on the internet.


I agree, but then people can't complain much when other people, like the Go team, have different opinions and make different decisions than they would. We should encourage experimentation rather than discourage it. No one is forcing anyone to use Go, and there are plenty of languages that have generics, are immutable by default, etc, etc.


> We should encourage experimentation rather than discourage it.

I made this point already, but to reiterate more succinctly: we should definitely do that, and we should definitely also write about our thoughts on how we think those experiments are going, which is exactly what the OP is doing.


Sure, but don't be surprised when people have different opinions and aren't convinced by anecdotes. Reading the comments here you can find tons of people that are shocked, SHOCKED, that not everyone agrees with them on generics, immutability, etc. I think it's important that we remember that these are all opinions with no empirical data to support them, on either side.


Calls for rigor are unproductive, I think. The unfortunate reality is that an experiment with the rigor you want would be prohibitively expensive. The big differences in productivity, I suspect, are going to show up in larger projects over longer periods of time. You also can't figure anything out from small sample sizes, so you would need to take four large teams, split them randomly into two groups of two, and have each team solve the same large problem over a long period of time.

Unfortunately, since we can't afford the rigor, we have to make do with anecdotes and less powerful studies.


I agree, but then we shouldn't assert conclusions with the level of certainty expressed in this blog post and many of the comments here.


Developers are also far more productive when using a programming language than when developing a programming language. So let's not have programming languages.

The thing is, a million developers can use a generic library, but only one has to develop it.

Excuse me now, I'm going to tune out a million boring arguments of the form "higher-level languages are amazing ..." and go back to debugging an IBM 360 assembly language program.


The usefulness of generics is easily demonstrated by Go itself — it has generics. It's only available to a bunch of magical functions (new, make, len, append), types ("chan", "map", arrays and slices are all generic types) and keywords ("range"). But it's there, in plain view.

Nobody can objectively argue that these aren't useful, or that they could have been implemented in a non-generic way without destroying the language. Wouldn't the utility transfer to the developer's own code?

As for "let Go do its thing", I would argue that it already has been done: plenty of pre-existing languages don't have generics. The lack is always felt, including in Go's direct precursors (such as Modula-2 and Oberon, languages that people later hacked generics onto because their real-world ergonomics as designed by Wirth weren't great).


> Any critique of Go seems to be met with angry pitchforks in this place.

You can say that about any language. The people that like the language will always defend it. e.g. PHP, C, Ruby. They all have flaws and yet when one talks about their shortcomings, the people get defensive.


But those languages are widely considered bad (well, aside from Ruby - I don't know much about it, because I already know Python and never felt I needed a different syntax for pretty much the same, and arguably less used, thing).

If someone started developing a language today and came up with C or PHP, they would be criticized for many of those languages' pitfalls, and there are a lot of improvements that could make them objectively better. But they are what they are because their design decisions were made in a totally different environment than today's, and the advantage of using them is that you get to leverage everything built since then (this argument is much stronger for C than for PHP, because PHP has alternatives that could be viewed as strictly better).

But Go is a new language. It's trying to sell itself as a better solution to existing problems so the level of criticism is going to be (justifiably) much higher - it doesn't just need to meet minimum usability bar - it needs to be better than existing defaults, and significantly so to justify the cost of switching - both in terms of learning and porting.


It's trying to sell itself as a better solution to Google's existing problems[1]. That's the keyword (no pun intended).

Just so happens that some developers at-large perceive an overlap, correct or incorrect, between their problems and Google's problems. So they use the language, and they're happy with it.

Go is not trying to sell itself as a better general solution. It was built within the context of Google's problems.

[1] https://talks.golang.org/2012/splash.article


>It's trying to sell itself as a better solution to Google's existing problems[1]. That's the keyword (no pun intended).

That's a good point - but if that is the case then any comparison to general purpose programming languages is pointless.


I think the point they're making is that the Go community is unusually pitchfork-ey. Having used Go since pre-1.0 days, I certainly agree; there's a very strong sense of, "if you want <feature X>, you're doing it wrong" - despite legitimate concerns, like the ones outlined in this article.


It reminds me of the Java community when Sun stopped adding significant features for years after 2004. Every feature that wasn't added was justified by most Java developers as protecting us from ourselves. When Java 8 came out, suddenly some of those bad/evil features became the flavor of the week. There's a tendency to try to convince yourself that the language designers made the best decisions at every turn, which isn't a healthy way of thinking critically.


I've heard this same criticism levelled at the Clojure community, as well.

Honestly, I can't think of any language community that's developed such a reputation for pitchforkiness towards suggestions as the Go and Clojure communities.


What a strange argument. The difference is that Clojure is about as extensible a language as it's possible to have.

A macro system in a homoiconic language allows you to implement many kinds of syntactic sugar - things that would be full-on 'language features' in other languages - as a simple library. See core.async: https://github.com/clojure/core.async


Steve Yegge has written a lot about this. Much of his writings on the subject are in these mailing-list posts: https://groups.google.com/d/topic/seajure/GLqhj_2915A/discus...

Make sure to expand all of his posts on the thread, because he goes back and forth for a while.

Much of it has to do with the community's attitude towards macros: there's an attitude of "macros are bad and you shouldn't use them", and people who write macros are often jumped on by the community.

Here's one sentence of Steve's that sums it up:

> When people announce: "hey, I made a loop macro!" the response absolutely can NOT be: "why can't you just write it as a series of maps and reductions?"

And another:

> If Clojure people all said "of course you can use macros! Of course you can use CL-style loop facilities! It's your code, do what you like! Feel free to use nonlocal exits all you like!" -- well, then it would be a lot closer to a Yes language.

The problem is the way the community treats people who don't follow the prescriptive norms of the core Clojure people (norms which are often in conflict with the broader Lisp community).


Using 'macros' is no substitute for having taste.

The Clojure developer had a certain vision for a new language - otherwise he could have just continued to use Lisp (which he earlier used for a few years). It might be useful to respect that and develop Clojure along this vision.

Something like Common Lisp follows a different vision. Common Lisp was a large community effort and the language EXPLICITLY had been designed to be morphed by the user into widely different shapes. That's why it reserves characters to the user, why it has a programmable reader, why it has procedural macros, ..., and why CLOS has a Meta Object Protocol. Probably that was also too much flexibility.

But even with Common Lisp, because it gives you little guidance how to use it and there are a gazillion programming styles possible, you need to develop taste. You can design ugly code and extensions and you can learn to develop better code and extensions. Common Lisp supports LOOP, because it was already there (it was introduced with Interlisp in the 70s, then brought to Maclisp and Lisp Machine Lisp) and there wasn't a better alternative at that time.

The 'best' iteration construct in the Lisp world is Jonathan Amsterdam's Iterate. But that would also not fit well into Clojure... But Iterate fits well into Common Lisp and works nicely as an alternative to LOOP.


> You can say that about any language. The people that like the language will always defend it. e.g. PHP, C, Ruby. They all have flaws and yet when one talks about their shortcomings, the people get defensive.

That's more the developers than the language. I'm a diehard Ruby guy, but when people talk about its shortcomings or the benefits of another language, I listen. It only clarifies what can and cannot be done, and what would be better done another way or in another language.

I think it's part the developer, part the community around it.


Talk about the (legitimate) Haskell shortcomings and you'll be met with open, honest acceptance, mitigation strategies, and academic discussion.

IRC, /r/haskell, whatever.


Rust too. The bunker mentality of the Go community is not unique, but its fervency, I think, is not currently matched anywhere else.


I don't know about that.

C programmers can be defensive when it comes to changes in the core language, but they're very receptive of all kinds of third-party libraries, you don't see much of "if you want <feature X>, you're doing it wrong". Take, for example, object-oriented programming. If you try to confront a seasoned C programmer about how C sucks because it doesn't support OOP, instead of being told that OOP is bad and C shouldn't ever support it, you'll most likely get a response saying that C does support OOP with the proper libraries, such as GObject, and pointing out that large C projects like the Linux kernel are already object-oriented.

Go, on the other hand, just pooh-poohes the concept. Clojure has a similar negative attitude. For example, the Clojure community is notoriously hostile to any suggestion of implementing Common Lisp's loop macro. There's no, "well, that's the beauty of Lisp, you can always write your own macros if you don't like what comes with it". Instead, you just get vitriolic condemnation of the whole idea of such a construct. If you write your own loop macro and post it in a Clojure community, the response is typically "why would you even think of writing such an abomination, what is wrong with you?", which is a disgustingly hostile way to treat people who are volunteering their time to contribute to the community.


Is the bunker-mentality a recurring pattern for all Google OSS?

I've heard people talking about it in Dart, Angular, and V8/Chrome, and I'm not sure if it's true or not.


I don't think it's Google specific. I'm quite a Scala fan but there was a time when many of us, probably myself included, had an unhealthy bunker-mentality.


I get where you're coming from (I think... let me know if I've missed your point), but I think the purpose of Go's type system is to offer some safety while emulating a dynamic language.

In some cases, rigid type-safety is necessary, but I have trouble taking this criticism seriously while languages like Python enjoy extreme success. If Python can be insanely useful (and acceptably safe), then why not Go?

tl;dr: Go is not -- from a practical point of view -- a static language. (Note to pedants: I'm talking about Go's use, not its formal nature.)

Better tl;dr: Think "Python's type annotations" rather than "Rust".


Then it needs to stop calling itself a 'systems programming language.' Or maybe it doesn't, but others need to stop calling it that.

To me, it's a replacement for Java, not C++. I think that's a reasonable target.

Mandatory GC, lack of generics and therefore rampant use of downcasting & duck typing -- these make it difficult to write safe and fast code for systems level or embedded type work.


>Then it needs to stop calling itself a 'systems programming language.'

Why must a systems programming language be static and strongly typed?

I don't mean to be flippant, but to me, 'systems programming language' means 'a language that facilitates productivity for systems programmers'. By that description, Go certainly qualifies.

>Mandatory GC, lack of generics and therefore rampant use of downcasting & duck typing -- these make it difficult to write safe and fast code for systems level or embedded type work.

Point taken, but not all systems programming requires such extreme safety.


ASM isn't strongly or statically typed, so that's obviously not a requirement for a "systems programming language".

The requirement most people mean, when they say "Go isn't a systems programming language", is the ability to accurately control execution. With Go, you can't, because of the GC.


Define "systems programming" (or "systems programmer").

Google's definition seems to be "building large systems". For the types of large systems Google wants to build, Go works very well.

But others define it as "building operating systems". Go is horrible for that because of garbage collection and inability to directly access the hardware.


Ah, maybe that's the misunderstanding here.

I can't comment on "building operating systems", but it seems improbable that a GC could never be used in such endeavors.


Then good that it stopped years ago:

https://golang.org/doc/

>>> The Go programming language is an open source project to make programmers more productive.

Go is expressive, concise, clean, and efficient. Its concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines, while its novel type system enables flexible and modular program construction. Go compiles quickly to machine code yet has the convenience of garbage collection and the power of run-time reflection. It's a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language. <<<


> Its concurrency mechanisms make it easy to write programs that get the most out of multicore and networked machines

But more realistically, it makes it easier. Those things still manage to be hard at some point.


"expressive"

"concise"

"novel type system"

"flexible type system"

Just a few things that need correcting. Really ... "novel type system" ...


Ok, well that's funny. You pick "novel type system" apparently because you find it especially absurd, but I find that it's the only accurate one on the list.

Go isn't expressive, nor concise, nor does it have a flexible type system. But if any of its claims are true, it's that its type system is at least a little novel.

Sure it's not the most exotic type system out there, but its interfaces are very useful and they aren't found in many other languages (not implicitly satisfied interfaces, that is).


Go's brand of duck typing isn't exactly a novel type system.

If I had to describe Go, I'd say "I like Hoare's paper" combined with "I read the first 10 pages of the book 'my first compiler'".


> But if any of its claims are true, it's that its type system is at least a little novel.

Are you saying it's novel because it picked the worst features of past type systems? ;-)

Technically, it is indeed novel, but it's novel because making these choices was pretty dumb in the first place so nobody made them...


C doesn't have generics, and is certainly a systems language. I don't think you have a strong point here. Systems != embedded.


> C doesn't have generics, and is certainly a systems language.

C doesn't have a mandatory runtime, ubiquitous dynamic dispatch or a GC, and it lets developers decide whether to stack or heap allocate.

cmrdporcupine also isn't talking about generics, they're replying to the assertion that

> Go is not -- from a practical point of view -- a static language


C programmers use void pointers for generics. :)


And so, basically, does Go.


Except there is a fairly large difference with interface{}, in that it is typed. interface{} is more like Object in Java than void* in C. If you incorrectly cast an interface{} to a type it's not, it will be a runtime error.

Details: http://research.swtch.com/interfaces


> If you incorrectly cast an interface{} to a type it's not, it will be a runtime error.

Sidenote: you may or may not have meant a panic, so I just want to clarify.

Casting an interface to a type incorrectly will not cause a panic/crash, assuming you use the proper syntax.

    // Will panic
    foo := myInterface.(BadType)
    // Will not panic
    foo, ok := myInterface.(BadType)
In the latter, you simply check whether `ok` is true. If it's false, the interface value's dynamic type does not match the asserted type (or does not implement the asserted interface).


Go doesn't have casts. It has conversions and type assertions. This is a type assertion.


C has generics as of C11 via '_Generic'. Google can tell you more, and here's a blogpost with examples: http://abissell.com/2014/01/16/c11s-_generic-keyword-macro-a...


Well, yes, kind of. It's not something you would really use a lot. It doesn't help you with generic types, but only with generic functions. Since it only makes sense when coupled with macros, it really only helps with functions that are written out but accessible by the same name. For example, selecting sinf() or sin(), or writing a max() function.


It has no chance of replacing Java, when Java is all about ecosystem, tooling, maturity and talent pool. And it has generics.

I mean I am no Java expert, but e.g Android Studio is miles ahead anything the go team will be able to ship in the next few years. And I don't think they're even focusing on building such tools since they seem to be stuck on building a debugger right now.


Java doesn't actually have runtime generics. Just type erasure syntactic sugar. ArrayList<Integer> is the same type at runtime as ArrayList<Foo>.


Sure, but I'll take it if that means I can at least use some type of generics. :)

In my experience it's decent, even if not as powerful as C++ templates. Being able to specialize containers is a basic comfort for me.


This strikes me as a distinction without a difference, in the present case. How is this meaningfully different from the programmer's perspective?


You can't distinguish them at runtime from each other.

Your memory consumption is higher, because type erasure requires elementary types to be boxed. Since the boxed value is itself allocated somewhere else, there's indirection. Indirection means CPU stalls. Another consequence is data-cache pollution: a typical L1D cache has only 512 cache lines of 64 bytes each.

Other than that, I guess nothing.


But if you're worried about that amount of memory consumption, why are you using Java? If you need that much control over memory, Java may not be the language for you.

Ditto if you're worried about the performance hit from cache misses.


Go was never a systems programming language as soon as it made the decision to be garbage collected. Don't get me wrong: in many cases I like GC but you'll never dethrone C/C++ for many use cases with a GC language. I find it interesting that many Go pundits I speak to just don't seem to get this.

I wouldn't even say Go's target market is Java. I'd say it's actually Python. Rob Pike has spoken about this [1].

I like Python. It's fun but for anything nontrivial I've become disillusioned with it because the supposed productivity gains are offset by having to write unit tests for spelling mistakes and typos. So I like Go for this purpose. It's not quite as expressive as Python but it's in a sweet spot IMHO.

[1]: http://commandcenter.blogspot.com/2012/06/less-is-exponentia...


The Oberon and Lisp machine people would disagree, given that they wrote whole OSes and platforms in GC'd languages. The trick is to support manual control and unsafe code where necessary. I know it can work for Go with small changes, because it worked for the Wirth languages Go was based on.


Hoping latest in type hinting along with IDE support will really kick this one home: https://www.jetbrains.com/pycharm/help/type-hinting-in-pycha...

I'd love support for "check this project" that would essentially be similar to compiling a Go project.

Python is awesome for so many things:

- Quick project iteration
- Web development
- Testing (the test suite is actually pretty sweet!)
- General-purpose programming and scripting

Currently, not the most productive tool for large projects or complex projects, or projects where static analysis pays huge dividends (usually this fits in one of the previous two categories anyway).

Doesn't mean it has to be: but it'd be really nice to have...


I find PyCharm addresses that Python weakness nicely. I'm sticking with C++ & Python pro tem. When Rust is a little more mature I think I'll invest.


Why do you need to write unit tests for spelling mistakes and typos in Python?


It's one class of errors that doesn't typically happen with a statically compiled language. People like to feel 'safe' about the dullest things, but typos can be found pretty easily by running the code you just wrote in a REPL, or at least by structuring your application so that you can quickly get it to the same state to test live. But honestly, you can just repeat the typos in your tests, or make new ones; in any case, for me at least, it's a class of error that comes up very rarely, regardless of whether I'm using mitigations like auto-complete. And lastly, if you're unit testing "properly" - aiming for good coverage, which unit-testing advocates say you must do regardless of a static or dynamic language - your unit tests should catch various typos as a side benefit. Testing things like "int in -> int out", or "reading this function, it calls these methods on foo; here's a mock foo, and we're just going to make sure all of them get called as expected and that I didn't misread/mistype either the code or the test code", seems insane.


I don't think every language needs to be rigid - something like TypeScript's gradual, unsound typing can be very valuable, and I certainly think there's a place for a language where casting is easy and idiomatic. But you can do that and still enable generics, covariance and the like - as TypeScript in fact does.

What's incredibly infuriating to see a language that clearly has everything it needs to offer generics - indeed it clearly has a generics implementation already written, since the builtin types are generic - but it won't let me use them.


I wouldn't call myself part of the Go community in any real way. There are people doing much more than I am. I personally like Go. We use Go at Creative Market (and do so increasingly). I will say that my impression of Rob Pike and some members of the community is that they just put on earmuffs and say, "no, you're doing it wrong, you don't need that" (generics). I think this article clearly demonstrates that working outside the type system by design sets the language back.

For those who are still arguing about what Go replaces (C? Java? Python?), I try to look at it practically. In the interest of productivity, I might want a language with garbage collection. Ok, so let's set C aside. In the interest of performance, I might want a compiled language. Ok, so let's set Python aside. And maybe I don't care a great deal about portability, so set JVM languages aside.

Go feels like a very practical language, and that's because it is a practical choice for a lot of people.

I hope the Go community does a better job of embracing feedback and caring about language design. It's young enough that breaking improvements could help in the long-term, even if they hurt in the short-term.


No one says you don't need generics. Here's rsc explaining why it's not in the language: https://news.ycombinator.com/item?id=9622417

No piece of feedback is ignored, I'm not sure how you've gotten that idea.


It's funny that you set Java aside based on something that it does which you do not need. I am not convinced that Go solves any problem that Java can't solve, other than that it doesn't have the baggage of 20 years' worth of bad open source libraries and a culture of writing 20 layers of abstraction where no layers are needed.


"it doesn't have a baggage of 20 years worth of bad open source libraries"

Give Go some time. Its inflexibility will inevitably yield some interesting baggage of its own.


SO CLOSE! You almost got there. I DO want a language that replaces C, Java, Python, etc. It must have garbage collection, compile to a single static executable with no runtime dependencies beyond a clean install of the OS, and never require me to think about memory layout/management.

BUT, it must also have a nice set of high level abstractions. For web and application level stuff rather than system level stuff.

So, what's that magic language? Really!

Go is almost that language, but it "missed it by that much."


Generics aren't likely to be fixed any time soon, but it's not that the authors aren't listening. Here's Russ Cox's reply on the issue

https://news.ycombinator.com/item?id=9622417


Same for me. I tried really hard to enjoy Go, but it became an incredibly frustrating experience.

The language itself was frustrating to me in many of the ways outlined in the article.

The community was similarly frustrating.

One anecdote that really stuck with me was when enquiring about explicit language support for the error handling pattern. E.g.:

    f, err := os.Open("foo.bar")
    if err != nil {
        log.Fatal(err)
    }
It is something you perform extremely frequently, and it takes up an incredible number of lines of code. The response from the Go team was (paraphrased) "We don't see the value in making this pattern more terse when you can just write a macro for your editor to automate it".


And this is a clever response if you understand their point: messing with the language is rarely the correct answer. Do you want to remove some code? How about a function? How about hiding boring details behind more elegant objects? How about using a macro in the editor? And so on... The answer is not (from their perspective) "put some shit in the language".

The problem with Go is that is very idiomatic. If you want to work with Go you need to learn its way, and somehow you need to accept the trade-offs behind its design.

If you do not see any advantages in any way, probably it's not the right tool for you, and that's ok. No problem at all.


Well, as a mainly C++ programmer, I'd trade exceptions for that any day.

Much easier to understand what is going on, no need to find out if something in the call graph of the method I call throws or not. Also error handling code is where the error occurs, not some completely different place.

Properly handled exceptions aren't any less code anyways. Unless you can just drop everything on the floor.


As a Scala coder, I'd say you can have it both ways by encoding the result as type that can have success or failure cases. No fancy control flow needed, just pattern matching.


Yeah, and you can implement that in C++ very easily, too. Googlers might recognize StatusOrReturn, and similar macros.


You don't get pattern-matching though, and I expect the implementation would be somewhat gnarly without native tagged unions. At least now that C++ has lambda expressions you can get all the nice HoF so the usage side of the equation has become less troublesome.

Still, a far cry from

    data Result a b = Err a | Ok b


I think you mean you'd trade exceptions for that.


Fixed.


This is exactly the pattern that Erlang handles the best:

1. Expect your call to os.Open not to fail

2. Then do stuff with it

2bis. Or the running process will crash, the error gets logged, bad state won't mess up the flow, and a dev can always hot-patch it


The places I've seen Go used are microservice projects that would have otherwise been Python or Node. In context, Go's type system is an absolute dream. Is it more powerful than, say, Scala? Probably not. But that project was never going to use Scala, it was going to use Python, and Go gives us safety that Python doesn't.


That's not entirely fair. It legitimately solves a lot of issues - concurrency namely - even if it is hindered by its type system. I will bet my long term money on Rust, but where Go works, it works well.


I thought the problem it was supposed to solve was to provide a relatively simple language, that performs reasonably well, and which produces statically compiled artifacts to make deployment much easier and lighter-weight?

Go seemed to be borne primarily out of an ops-driven ethos. The fact that it's sort of stupidly become the current darling language of Silicon Valley hipsterdom isn't really Go's fault.


> My impression of the Go community (both within Google and outside of it) is that there is a very ... moralistic? .... sense of "You don't really need that, we know best" going on.

Then you weren't really listening, tbh. Everyone on the go team has acknowledged the usefulness of generics now. It's just that so far no good solution for the associated tradeoffs has presented itself.


> My impression of the Go community (both within Google and outside of it) is that there is a very ... moralistic? .... sense of "You don't really need that, we know best" going on.

Thats not limited to the Go community. It seems to be an attitude that is sweeping the second(?) generation of FOSS developers.

Seems like just working on code is not enough any more, it has to have some kind of social/fixing-the-world angle.


There is a whole new generation of FOSS/hacker folks younger than me (I'm in my 40s) that I can't relate to. It's so much an ego and self-marketing thing ; everything you do is blogged, you do the speaker circuit, you're judged in interviews by how many stackoverflow questions you've answered, what you have in your github repo, etc.

It makes me feel very old. I just want to keep my head down and hack.


There are many, many people who do just that.

I keep my head down, and hack. I don't care about those circuits. And as a result, I'm not as visible.


I too am in my 40's, I work with many of the "next-gen" folks in their 20's, and by and large, they just want to keep their heads down and hack too.

Of course there are the superstars that you hear about, who are super active, writing new languages and libraries and whatnot, but I would say that is the extreme end of the bell curve.


I've met a lot of people with your attitude both in open source and in industry, young people and old people alike. You want to keep your head down and hack? That's another way of saying things like, "I just don't want to deal with office politics." Unfortunately, the world is not set up to make you happy. Politics is just the reality of being human, and people who acknowledge that humans are political will end up getting more work done than people in denial.

Politics isn't a dirty word, it's just the reality of having more than one person on the planet.


"You don't really need that, we know best"

This is true. And what's always been funny to me, is the predecessor language(s) for many Go programmers, such as Python, had people saying similar things about how you don't really need static types, we know best. Then suddenly they saw the light.

All of which is to say, I've noticed that many people who enjoy Go come from languages which Go is a step up from rather than a step down. And that's good for them, but compared to what else is out there, it is a step down.


I think they just made their line in the sand. Any criticism of the lack of generics has to take into account the arguments already made for its, as far as I know, deliberate omission.

Every so often people start grumbling about C. They call it a terrible language because the specification contains undefined behavior (and less frequently because they removed the linter from the compiler early on). These arguments are well known and have been addressed over several decades. Any complaint about undefined behavior has to account for the fact that said omissions are intentional and have been well argued for by the ANSI committee and the community of C compiler developers. If you can't do that then your complaints are just going to fall on deaf ears: you haven't contributed anything that we don't already know.

If anyone wants to introduce generics into Go they're going to have one hell of a debate on their hands. I believe the reasons against it are firmly established and it will never happen. I may be wrong but any argument for their inclusion has a lot of work to do.

This doesn't make Go a bad language. It's probably just not the right fit for your purposes if you really need generics. Such abstractions are not a universal property of languages. I get by fine in C without them... but I prefer C (or Common Lisp) because if I did want them there are good libraries to give me those features.

I don't know when it became fashionable to have such opinionated languages but I tend to disagree with most of them so I just avoid them for the most part.


What do you think about goroutines and channels?


Using go for goroutines and channels is a bit like using Perl for regular expressions. The features have been added in a way that makes them easy to use and serves as a nice idiomatic platform, but fundamentally it's functionality other languages can provide via library support.


> but fundamentally it's functionality other languages can provide via library support.

Implementing goroutines and channels requires language and runtime support for green threads that are n:m multiplexed on top of native threads. It can not be implemented as a library in most languages, at least not efficiently. Any language with thread support can set up threads and put a concurrent queue between them, but that's hardly the same thing.

Languages such as Go, Erlang and Haskell do this. Interestingly, early versions of Rust had green threads (iirc) but later migrated to using native threads only.


Any language that has continuations, or at least thread-safe coroutines, can implement goroutines and channels. This includes scheme and lua. Also any language where the stack can be directly manipulated can implement those thread-safe coroutines, so that opens up C/C++ and perl and possibly some others.


Yes, all the languages you mention have the necessary language and runtime support required, probably a handful of others too (but by no means every language out there).

C and C++ are a bit different because you need to resort to assembly and know details about the target arch to do stack switching but that's acceptable.


It's been done in C; check out libmill[0], which even matches the syntax pretty well.

[0]: http://libmill.org/


If "it" includes parallelism then no, libmill has not done it:

"Libmill is intended for writing single-threaded applications." http://libmill.org/documentation.html#multiprocessing


Yeah, you can do this in C if you do stack switching with a little bit of assembly. It's kind of doing a custom runtime environment for C. Not many other languages can do this.


No ? Qt, Gtk, ... all do it.


Huh? Care to elaborate on this? As far as I know, GTK (and Qt IIRC) use a single threaded event loop. That's not at all the same thing (albeit can be used for similar things).


Qt and Gtk's single threaded event loop are akin to what Go calls its scheduler. That scheduler in Go is also (partially) single threaded, but event handlers run in other threads.

Having event handlers run in separate threads is very much supported in both Qt and Gtk (they can't be UI event handlers in quite a few cases, but network events and file reading in separate threads scheduled by the central event loop like in Go is not a problem).

I will say that it's much better organised and with far fewer caveats in Go.

And a point of personal frustration : both Qt and Gtk support promises through the event loop, Go does not. I find that a much more natural way to work with threads.


Go scheduler is not single threaded (whatever that means).


Go's scheduler is a single threaded event loop, like every other scheduler on the planet. It runs, sequentially, in different threads which makes the situation confusing, but it's still single threaded.

It's also cooperative. An infinite loop will effectively kill it (a single infinite loop will kill it before Go 1.2 I believe, but now you need enough of them). More importantly, there's a number of simultaneous syscalls that will kill a go program.

I like about go that it's moving the OS into the application. The thing is, Go's OS is not a very good one. It doesn't have the basic isolation that OSes provide. I hope it will improve.


> Go's scheduler is a single threaded event loop

No, it isn't.

> like every other scheduler on the planet

This is not true either. In fact it doesn't make sense. Schedulers are not single threaded event loops. The scheduler (any scheduler) is entered in various scenarios. Sometimes voluntarily, sometimes not. Sometimes the scheduler code can run concurrently, sometimes not. Sometimes the scheduler code can run in parallel, sometimes not.

The Go scheduler is both concurrent and parallel.

> It runs, sequentially, in different threads which makes the situation confusing

I don't know what this statement means. The Go scheduler certainly runs on different threads. So what.

> It's also cooperative.

Actually it's not purely cooperative, it does voluntary preemption, very similar to voluntary preemption in the Linux kernel. The check happens in every function prolog.

> More importantly, there's a number of simultaneous syscalls that will kill a go program.

There's self-imposed user-configurable limit that defaults to 10000 threads for running system calls. The limit has nothing to do with the Go scheduler, it can be set arbitrarily high with no penalty.

> It doesn't have the basic isolation that OSes provide.

The most basic isolation provided by operating systems is virtual memory. Go is a shared-memory execution environment, so this doesn't apply. What other "basic isolation" is provided by operating systems that's missing from Go?


> I don't know what this statement means. The Go scheduler certainly runs on different threads. So what.

It means that some of the data structures the scheduler examines on every run are shared data, with locking. That makes it effectively single threaded, even if it technically runs on different CPUs (at different times). Put another way it means that it'll never run faster than a single-threaded scheduler would.

> Actually it's not purely cooperative, it does voluntary preemption, very similar to voluntary preemption in the Linux kernel.

You mean preemption inside the linux kernel, in some kernel-space threads ? Because it sounds very different to preemption of applications.

> The check happens in every function prolog.

So it's cooperative. The standard that is normally used is simple: does "for {}" crash ("block", if you prefer) some part of the system? On the Linux scheduler the answer is no. In Erlang the answer is no. On the Go scheduler, the answer is yes.

On the linux scheduler with proper ulimits it's bloody hard to crash the system, for instance, forkbombs, memory bombs, ... won't do it. I hope we'll get a language where you can do that too (and the JVM comes quite close to this ideal, some JVMs actually have it even).


> It means that some of the data structures the scheduler examines on every run are shared data, with locking.

This is true for every scheduler, not only for the Go scheduler.

> That makes it effectively single threaded

It would limit parallelism to one, if there was a single lock. This used to be the case, but now the locking is more finely grained. But this only matters if there's lock-contention anyway, which is not the case for current Go programs.

> single-threaded scheduler

Again, no such thing as a single-threaded scheduler. Even when talking about systems with a big kernel lock, or with a global scheduler lock. The scheduler is not "single-threaded" or any other term like that because the scheduler is not a thread, it's not an independent thing, it only runs in the context of many other things.

> Put another way it means that it'll never run faster than a single-threaded scheduler would.

As mentioned already, this is not strictly true for Go, but this matters more for thread schedulers in kernels, less so for the Go scheduler, mostly because the number of threads executing Go code is quite restricted, maximum 32 threads at the moment. It's very likely that this situation might change. For example, the SPARC64 port that I am doing supports 512-way machines, so I'd need to increase the GOMAXPROCS limit. Then maybe we'd have more lock contention (I doubt it).

It's true that the scheduler will probably not scale this well, and will need improvement, but it's unlikely it will be because of lock contention.

> You mean preemption inside the linux kernel, in some kernel-space threads?

Yes, voluntary preemption inside the Linux kernel, not preemption of user-space threads. The Linux kernel can run a mode (common and useful on servers) where it might yield only at well defined points. The name is a misnomer, this is not really preemption, but it's not cooperative scheduling either. It's something in the middle and it's a very useful mode of operation, nothing wrong with it.

> So it's cooperative.

It has the good parts of both cooperative and preemptive scheduling, but yes, it's certainly cooperative.

> The standard that is normally used is simple : does "for {}" crash("block" if you prefer) some part of the system?

Not with GOMAXPROCS > 1, which is now the default on multi-way machines (all machines).

> On the Go scheduler, the answer is yes.

Only sometimes. This is fixable while still preserving voluntary preemption, since the voluntary preemption-check is so cheap that you can do it on backward branches if you really need it. This wasn't done since this wasn't a big problem in practice, even with the old GOMAXPROCS=1 default, but there's room for improvement.

> On the linux scheduler with proper ulimits it's bloody hard to crash the system, for instance, forkbombs, memory bombs, ... won't do it. I hope we'll get a language where you can do that too.

I don't understand the analogy. It is not clear what "crash" means here, and it is not clear how it would apply to a runtime environment. All that stuff, forkbombs, etc, means that you can configure the system so arbitrary code can't affect the system in those particular ways.

But for a language runtime you don't have arbitrary code usually, you control all the code. So I don't understand how any of these would apply.

Coming back to the scheduler. There's always room for improvement. Until relatively recently, the Go scheduler barely scaled past two threads! (although not because of lock contention). Now it scales really well to (at least) 32 threads. There are still improvements to be made, and I am sure they will be made. I was just addressing the "single-thread" issue.


I see we mostly agree, but one thing here is a glaring error :

> It has the good parts of both cooperative and preemptive scheduling, but yes, it's certainly cooperative.

For me, the best part of cooperative scheduling is that you can work entirely without locking shared data structures, because you get "transactions" for free. This means it's rather difficult to get data races, deadlocks, etc. Go's scheduler certainly does not give you that, trades it for spreading work over different cpus.

So it has problems:

1) necessity of locking, using IPC mechanisms, ... (like preemptive schedulers, and let's face facts here : channels aren't enough in real world apps)

2) everything gets blocked by large calculations (like cooperative schedulers)

3) more generally, easy to crash via a misbehaving thread due to unrestricted access to shared resources (not just CPU) (like cooperative schedulers)

And advantages:

1) Actually uses multiple cpu's/cores/... (like preemptive schedulers)

2) integrated event loop that scales (like cooperative schedulers)

If you want to see a programming language with a "scheduler" that doesn't have the bad parts of cooperative schedulers, check out Erlang. If you attempt to crash erlang with infinite loops, bad memory allocation, ... (on a properly configured system) that just won't work, the offending threads/"goroutines" crash leaving the rest of your program running fine. The offending threads will restart if you configure them to do so (which is really easy).

The same can be achieved, with much more work, on the JVM, or, also with much more work, with python's "multiprocessing" library, part of the standard library.

> > On the linux scheduler with proper ulimits it's bloody hard to crash the system, for instance, forkbombs, memory bombs, ... won't do it. I hope we'll get a language where you can do that too.

> I don't understand the analogy. It is not clear what "crash" means here, and it is not clear how it would apply to a runtime environment. All that stuff, forkbombs, etc, means that you can configure the system so arbitrary code can't affect the system in those particular ways.

Crash means that the system/"program" doesn't respond (in a useful manner) anymore.


+1 on this. Clojure's core.async[0] is the perfect example of an implementation of CSP as a library.

Even JS can be used to implement such concepts via the use of generators[1].

[0] https://github.com/clojure/core.async

[1] https://github.com/ubolonton/js-csp


I used to feel this way until I realized the limitations of core.async. In Go I don't have to worry about whether the particular functions I'm calling, especially IO-related, are blocking or not, as Go will create new lightweight goroutines as necessary to deal with all that. With core.async, if I use blocking IO inside of a coroutine I risk causing thread starvation. See http://martintrojer.github.io/clojure/2013/07/07/coreasync-a...

Maybe things have changed since 2013, but I feel like this is a fundamental limitation of running on the JVM vs what Go can provide in its runtime.

Edit: Also, it appears to be much easier to simply "run out" of Clojure coroutines than Go goroutines, but perhaps that's also changed. Anyways, my point is that by core.async operating as a macro you still can't overcome limitations of the underlying runtime, whereas Go's runtime was purposely-built to support goroutines.


Small remark: Go I/O layer is safe to use only with network I/O. Only network I/O plays nice with goroutines.

File I/O, or anything else treated as a syscall by the Go runtime, might turn your program into a 10k-OS-threads monster. The scheduler will keep creating new OS threads to replace those locked in syscalls until the thread limit is reached and the whole program crashes. The only way to prevent it is to restrict your syscall layer to a fixed-size goroutine pool.

I had an interesting case recently: my app serves some data from tons of files living on a NAS, accessed via an NFS mount, and one day the NAS hung completely, with every I/O call to it lasting forever. Even 'ls /mount-point-of-nas' just did nothing forever until Ctrl-C. In my case I applied a poweroff-poweron cycle to the NAS, and everything came right in minutes, just as the NAS booted. And afterwards I wondered: what if my server had been written in Go instead of Erlang...

And, BTW, you can never be sure, that underlying libraries of your code are safe to use.


Correct me if I'm wrong, but describing core.async as "a library" isn't perfect in the context of a golang discussion. Doesn't the `go` macro rewrite the abstract syntax tree / JVM bytecode to make e.g. the `<!` macro co-operate with the channel?

https://github.com/clojure/core.async/blob/master/src/main/c...

That's not something that could be done with golang as far as I know.


I'm not sure I understand your point. Could you clarify?


`hackcasual` was saying that certain language features can be added as a library rather than needing to be integrated into the core language

> "fundamentally it's functionality other languages can provide via library support"

You were saying that CSP can be added as a library, citing Clojure's core.async.

All I was saying was that the way in which core.async was implemented doesn't feel like a great example of a 'library' in the sense that most people would understand in the context of a discussion about Golang.

Golang is a static, compiled-to-machine-code language without macros (in the LISP or C sense) or homoiconicity. The reason core.async can be implemented as a library in Clojure is that it has these things.

If you're talking about adding CSP to a language just by adding a library and without having to get into the internals of the language, core.async isn't a good example.

Again, happy to be corrected.


Well core.async is a Clojure library so it uses features available in Clojure. I don't see how it would affect the fact that it is a library.

I've also linked to js-csp, a JS library obviously not implemented using macros.

I can also find other examples of implementation as libraries, but I have no experience with them:

- Scala: https://github.com/rssh/scala-gopher

- F#: https://github.com/Hopac/Hopac

- C++: http://www.cs.kent.ac.uk/projects/ofa/c++csp/


Wasn't saying it can't be done! Just nit-picking at the particular example wrt to golang.


Ok, I've got it now :)


It's also a perfect example of the limitations of that approach - core.async had to make serious compromises in its interface because it was 'just a library': expressions with <!'s and other calls can't just be pulled into functions or for-comprehensions like normal code. That's not to say it's poorly-done, or not useful - it is well done and useful, and those compromises are in line with clojure's goal of integrating well with host vms. It's just an example of how builtins can be "simpler" sometimes. https://github.com/clojure/core.async/wiki/Go-Block-Best-Pra...


> +1 on this. Clojure's core.async[0] is the perfect example of an implementation of CSP as a library.

With the added convenience that shared mutability is pretty much nonexistent.


I tried making a CSP library for C. It was not very pleasant. At best you end up with something significantly less safe than POSIX threads, but now with message passing. While C is an extreme example, it's certainly not true that all languages can add CSP/actors/whatever in an appetizing form through a library.

Anyway, there's a reason the phrase "tacked-on" has such negative connotations.


What's your take on libmill?


They're fine concepts, but... On the little project I was tasked with using Go with at Google I got slapped down by the readability reviewers for using them. I think this is an interesting construct, but not sure that community really knows how to use them well?

Erlang at least is consistent on this -- it has that hammer well tuned and isn't afraid to pound nails with it.


Mmmm I'm puzzled. I don't think we are consulting the same community... Goroutines are everywhere in all the main go projects. Goroutines and channels are one of the main reasons Go exists.


"Readability reviews" at Google are somewhat notorious for imposing fairly arbitrary style choices extremely rigidly, for instance 80 characters per line (woe betide you if even one line in a 5000 line patch is 81 characters...). It doesn't sound so surprising to me that the people behind such a process might have decided that one of Go's primary selling points is 'confusing', given that the Go authors appear to believe their colleagues can't handle a brilliant language!


It's quite possible. After my experience with Go code reviews internally at Google, though, I am not eager to go back. I'd work on a project if I was paid to do it, but I wouldn't start one or advocate for it.


He said at Google not the community. I wouldn't doubt there is a difference.


I'm pretty sure at Google they know why they created Go, why they're using Go, and so on... The assumption that Google reviewers had something against goroutine puzzles me a bit. Maybe the problem was not about goroutines, but about how they were used... I don't know, I'm guessing...


Yea, not having OTP to create proper structure, instead having goroutines and channels created and destroyed all over the place, really hurts readability.

But OTP isn't really possible either because Go lacks links and monitors.

Plus not being asynchronous and no distribution (wouldn't want to go over the network with sync channels anyway)...


I think the way Go addresses concurrency is simple and straight-forward. It's trivial to write concurrent applications. If your target is writing semi-low-level infrastructure software I think Go is a great choice. It's definitely got a lot of fans in the world of people writing software for DevOpsy type applications.

From a personal standpoint, it's missing a lot of the features I like, namely ADTs, list comprehensions, folds, maps, etc. But that's just my personal style, not something wrong with Go. Go programs don't necessarily always look pretty but you can usually understand them after a minimal amount of study because the language is so simple.


> I think the way Go addresses concurrency is simple and straight-forward. It's trivial to write concurrent applications.

Yes, CSP[0] is a very interesting concept. But it's not something unique to the go language.

[0] https://en.wikipedia.org/wiki/Communicating_sequential_proce...


"I think the way Go addresses concurrency is simple and straight-forward. It's trivial to write concurrent applications."

Except when it is not. The message-passing style of concurrency is just a dual of classical blocking concurrency with critical sections, mutexes, monitors and condition variables. An actor is a dual of a critical section. An actor's mailbox is a dual of a mutex. Sending/receiving messages is a dual of wait/notify. With any complex CSP program you can have all the same problems: race conditions, starvation, deadlocks (livelocks), etc.


Goroutines are for all practical purposes threads. Threaded code is generally thought to be difficult to write correctly, in ways that can't be solved just by making the threads cheaper to spin up.

Queues ("channels") are a good way to limit complexity of threaded code by treating each process as an agent. Besides some syntactical sugar Go doesn't really support this better than most other languages with threading, like Java.

Go has a weird thing going on where channels are sometimes used as a kind-of-replacement for iterators, which is error-prone since the "obvious" way to do it doesn't allow the consumer to stop the generator without a side channel. This can lead to buggy code that leaks goroutines, since goroutines cannot be garbage collected.

One of the best ways to reduce the complexity of threaded code is immutability - preventing race conditions by making sure a structure never changes while being read. Curiously, Go has no way to mark an object as immutable, and does nothing to detect or prevent objects being unsafely accessed from different threads.


> Queues ("channels") are a good way to limit complexity of threaded code by treating each process as an agent. Besides some syntactical sugar Go doesn't really support this better than most other languages with threading, like Java.

The magic of channels comes with select{}. Considering them to be only threadsafe queues is really missing out.

Supporting select{} in other languages is possible, but difficult and rare.


Also, by default channels are zero length and block. What is a zero length queue in other languages? Doesn't even make sense!

Zero length channels are a key part of coordinating concurrent threads in Go.


I've written and maintain various Go systems and agree. Go is like Rails was about 5 years ago. It ends up hurting the community members themselves in the end. I won't abandon the language because it works well for my use case, but I'm not pouring any open source efforts into it.


But Ruby is at least syntactically very powerful. Go feels like it was written to have a fast parser, but they killed that with the latest version.


Sorry, I meant the Rails community not Ruby or Rails itself. :)


> My impression of the Go community (both within Google and outside of it) is that there is a very ... moralistic? .... sense of "You don't really need that, we know best" going on.

That's by design. Rob Pike fostered that "we know best" opinionated style in the community from the very first Go announcements and tutorials. I encourage you to read the design documents if you haven't: they throw light on the majority of decisions behind the language.

It's not really for me, but I understand and generally respect where they're coming from.

----

> If you want to know how to handle some new layout situation, run gofmt; if the answer doesn't seem right, fix the program (or file a bug), don't work around it.

http://web.archive.org/web/20091113154825/http://golang.org/...

> If it bothers you that Go is missing feature X, please forgive us and investigate the features that Go does have. You might find that they compensate in interesting ways for the lack of X.

> More directly, the program gofmt is a pretty-printer whose purpose is to enforce layout rules; it replaces the usual compendium of do's and don'ts that allows interpretation.

> Go doesn't provide assertions. They are undeniably convenient, but our experience has been that programmers use them as a crutch to avoid thinking about proper error handling and reporting

http://web.archive.org/web/20091114043443/http://golang.org/...

> Orthogonality makes it easier to understand what happens when things combine.

> By their very nature, exceptions span functions and perhaps even goroutines; they have wide-ranging implications. ... It would be nice to find a design that allows them to be truly exceptional without encouraging common errors to turn into special control flow that requires every programmer to compensate.

> Generics may well be added at some point. We don't feel an urgency for them, although we understand some programmers do. ... This remains an open issue.

> Experience with other languages told us that having a variety of methods with the same name but different signatures was occasionally useful but that it could also be confusing and fragile in practice. Matching only by name and requiring consistency in the types was a major simplifying decision in Go's type system.

> The convenience of automatic conversion between numeric types in C is outweighed by the confusion it causes. When is an expression unsigned? How big is the value? Does it overflow? Is the result portable, independent of the machine on which it executes?

http://web.archive.org/web/20091113154906/http://golang.org/...

> It is better to forgo convenience for safety and dependability

http://commandcenter.blogspot.mx/2012/06/less-is-exponential...


that's kind of what happens when you take some of the brightest minds that hacked on Plan9 and Inferno for years and have them build a programming language. It's going to be unique. Idealistic. Romantic.

reading many of the points here makes me think: alienated java user who doesn't like change


No, not at all. I want more change. Go is a bizarre mix of very conservative mixed with some modern concepts.

We all know Rob Pike is smart. Doesn't mean I have to agree with him. I don't like the things that Go is conservative about. I think the lack of a good way of doing clear type safe operations in a statically typed language is a terrible oversight.


> alienated java user who doesn't like change

Someone citing Haskell or Rust is probably the exact opposite of the stereotypical "Java drone," line-programmer churning away on bad features in the same fossilized Spring app for 10 years.


How do you get "alienated java user who doesn't like change" from a post whose author is obviously well versed in Haskell, Rust and Go?


Please let's not make personal attacks.


I think the word you were looking for was "regression" instead of "change"


> While many of these points (on generics especially) are completely legitimate, this article will fall on deaf ears. My impression of the Go community (both within Google and outside of it) is that there is a very ... moralistic? .... sense of "You don't really need that, we know best" going on.

And this encapsulates perfectly why I don't like Go. Writing Go feels like putting a straitjacket on myself. It seriously feels uncomfortable, which says a lot considering I come from a Python background, and I'm used to a language that tries to enforce a particular philosophy. In a lot of ways, Python's "there should be one and only one obvious way to do it" feels like working with pre-established harmony, while Go just tries to force its own arbitrary discipline on me.

If I'm going to use an AOT language, I'd honestly rather have the flexibility offered by D or Nim. And if I don't have to use an AOT language for something -- and Go is mostly being marketed as an alternative to non-AOT languages like Python and Java despite being AOT itself -- then I'd add Python and Perl 6 as languages I'd rather work with than Go.


This sort of gratuitous takedown is unfortunately crack for HN -- pages and pages of "here's how this popular thing is not like this other thing I like", without any thought given to why things are the way they are. Go is missing a lot of my pet features too but I know its authors are smart so I don't just immediately jump to assuming they don't know what they're doing.

Thought experiment: write a proposal that works through adding algebraic data types (or even just special-case the error handling as option types, if that is easier) to Go. I've tried it; I found that doing so brings up a bunch of other problems that don't make it an obvious solution. (E.g. you'll want a "match" operator. And then that means you need all statements to work as expressions. And you'll have to change how zero values work, which are pervasive throughout the language.) And I really like algebraic data types in Haskell.

At some point if you really want Haskell you should just use Haskell. Or Rust. And then you will find out that those languages have problems too, and you will understand that engineering is a question of tradeoffs, not of feature checklists like this blog post.


While a proposal would be better than a takedown certainly, I do think this kind of language advocacy has value. People don't generally get to choose the languages they work in, the industry chooses for them. So if you don't want to work in a language you dislike, writing things like this is necessary.

I tried a project in Go and really disliked it. For my personal projects, I will not start another one using it. But there are already situations where I have to write Go if I want to do my job. If I don't want that set of situations to grow I have to speak up.


> E.g. you'll want a "match" operator. And then that means you need all statements work as expressions. And you'll have to change how zero values work, which are pervasive throughout the language.

You don't need to make everything an expression for pattern matching to work. See Bjarne's C++ proposal: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n344...

You might have to change zero values, although you could make algebraic data types all be nullable if you wanted to avoid doing that.


Thanks (as always) for your informed comments!

I agree that you can make pattern matching work without expressions. My intuition is rather that it's not especially useful, because you need some way to make use of the result of the match.

Either you embed the rest of the function into the branch of the match statement, or you're back to stuff like:

    foo := ... # Some zero value ...perhaps nil?
    match get_foo() {
      Some(x) => foo = x
      None => return
    }
    # now use foo here
That is, to make use of the result of the pattern match you need a way to get the value out of the pattern match which puts you back in the kind of code where there's no pattern match. You could make just "match" be an expression but now the arms of your match must be expressions which runs again into the problem of Go being a statement-oriented language -- for example, you might want to construct a struct in your match arm but if you can't fit the struct construction into a single expression you're stuck again. (It's a similar problem to Python's lambda.)

There might be some other nice way to make this work, of course! All I am suggesting that if one does the effort of making a concrete proposal you'll find that any small feature like this brings in a bunch of related ideas (like Rust's semicolon) and is not as simple as "just add option types".


This article looks to me like a list of personal preferences presented as facts. There's much more depth to language design than what the author presents - a checklist of must-have features that are "obviously better".


> (E.g. you'll want a "match" operator. And then that means you need all statements work as expressions. And you'll have to change how zero values work, which are pervasive throughout the language.)

To me, that just sounds like the designers really painted themselves into a corner. I have yet to run into a situation where everything-is-an-expression feels like a problem. Am I missing something?

> At some point if you really want Haskell you should just use Haskell. Or Rust. And then you will find out that those languages have problems too, and you will understand that engineering is a question of tradeoffs, not of feature checklists like this blog post.

I think it's clear that Go has contributed to the conversation. In really accessible ways, it made some powerful points on the benefits of auto-formatting, fast compilation, static binaries, and so forth. But there's just so much missing of the highly productive points made by other languages, and there seems to be so little energy in the community to progress on those points. In that sense, it feels to me like Go is a bit of a dead end.


Did you have a chance to look at Kotlin?

It manages to be pretty FP-friendly without being FP-front-and-center, while keeping interoperability with Java as easy as possible and staying pretty simple.

But yes, you need to rethink the design from the start. I wish it would have been done differently, less C-like, but it hadn't.


We don't want Haskell. We don't want Rust. We don't want any of the current top twenty. We want Go-ish. We want the wonderful stuff that Go does have, but we want a few more high level features too.

I suppose once that language comes along we can stop our constant whining ;-)


"Go does not support operator overloading or keyword extensibility."

Very much working as intended, I believe. Experience from languages that support those features has shown that what we gain in the very few situations where those extensions make sense (such as defining mathematical operations on vectors using the same symbols that are used in vector mathematics), we lose in too many developers thinking they have a clever shortcut that an existing operator would be perfect for, to the detriment of readability and comprehensibility.

This is also the era of code-analysis-by-search-engine, and operator overloading harms that feature significantly. If I need to find all instances of vector addition in my code and I'm searching for '+', I'm going to have a bad time.


This is a well-written objection and I completely agree. Operator overloading is consistently one of the worst ideas I see in programming languages, C++ being the prime offender. Programmers think they're being clever when they write crap like

  boostfs::path path("/some/path");
  path /= "yourfile.txt";
Meanwhile, reading your code without being familiar with boostfs::path, my brain grinds to a halt while I try to understand what in the hell dividing by a string is supposed to do.

This is one of those "well intentioned" features that turns into a quagmire in practice.


> Operator overloading is consistently one of the worst ideas I see in programming languages

Except when not having it is the worst. Working with BigDecimal in Java is hell because it does not have operator overloading, meaning while you avoid

    boostfs::path path("/some/path");
    path /= "yourfile.txt";
you get saddled with bullshit like

    x.add(x.add(ONE).pow(2).subtract(x))


Now this is ugly:

    x.add(x.add(ONE).pow(2).subtract(x))
Especially because sometimes it is difficult to decide whether you should have x.foo(y) or y.foo(x). However,

    add(x, sub(pow(add(x, ONE), 2), x))
Is perfectly acceptable in my opinion (add some indentation if it's difficult to read). As long as you can pass parameters and return values without tricks (ie. passing pointers), this is quite nice. You should even be able to do this in Java with "import static". Or C++ with function overloading.

I'm definitely not a fan of C++ -style operator overloading, where you only have a finite amount of operators with set precedence/associativity and then they get overloaded to meanings that the symbols don't convey. On the other hand, I'm not sure what to think of Haskell either (where you can define arbitrary operators, like >>=, <+> or <$>). It's not as limited as C++, but there are some quite nasty examples that have gone overboard with operators.

Overall, it seems like operator overloading adds a lot of complexity to a language but the benefit is arguable.


However

I completely agree with this; it is a very confusing API.

You should even be able to do this in Java with "import static"

Yep, you should define a static methods add, sub, pow etc with BigDecimal parameters and then use import static.

Something along

    class BigDecimalMath {
        public static BigDecimal add(BigDecimal x1, BigDecimal x2) {
            return x1.add(x2);
        }
    }

Also JVM JIT would very likely inline such code, so you should not really worry about an added method call.


Too bad you can't just call functional interfaces in Java 8, or you could just do something like this:

    BiFunction<BigDecimal, BigDecimal, BigDecimal> add = BigDecimal::add;
    add(x, y);

Unfortunately you have to do:

    add.apply(x, y);

Which is kind of cumbersome.


Yes, it pretty much defeats the point. It would work if they had implemented a default method for an interface.

Unfortunately they brought something completely different in the name of Default Methods, and that was in my opinion not needed.


Eh, there's already no language symbols for exponentiation (unless you're really going to overload XOR in which case you're already part of the problem).


    func SameSideOfTriangle<N>(p0 Point2D<N>, p1 Point2D<N>, a Point2D<N>, b Point2D<N>) bool where N: Number {
        ba := Point2D(b.x.Sub(a.x), b.y.Sub(a.y))
        p0a := Point2D(p0.x.Sub(a.x), p0.y.Sub(a.y))
        p1a := Point2D(p1.x.Sub(a.x), p1.y.Sub(a.y))
        cp0 := ba.x.Mul(p0a.y).Sub(p0a.x.Mul(ba.y))
        cp1 := ba.x.Mul(p1a.y).Sub(p1a.x.Mul(ba.y))
        dot := cp0.x.Mul(cp1.x).Add(cp0.y.Mul(cp1.y))
        return dot >= 0
    }
Versus:

    func SameSideOfTriangle<N>(p0 Point2D<N>, p1 Point2D<N>, a Point2D<N>, b Point2D<N>) bool where N: Number {
        ba := Point2D(b.x - a.x, b.y - a.y)
        p0a := Point2D(p0.x - a.x, p0.y - a.y)
        p1a := Point2D(p1.x - a.x, p1.y - a.y)
        cp0 := ba.x * p0a.y - p0a.x * ba.y
        cp1 := ba.x * p1a.y - p1a.x * ba.y
        dot := cp0.x * cp1.x + cp0.y * cp1.y
        return dot >= 0
    }


I didn't mean to imply that it is never useful, just that it's not worth the baggage in my experience.

Genuinely asking: Why does this function need to be generic over Numbers? Couldn't you implement it once or twice for whatever actually-numeric types you need? How many different Point2D template instantiations do you actually have?


In my project, i32, u32, f32, and f64. Possibly others.

And that's only a small function (only one part of what's needed to check point-triangle intersection!) Copying and pasting e.g. Sutherland-Hodgman clipping or 4D/5D matrix math would get unsustainable quickly.


My experience from graphics programming is that, yes, this is a subdomain where you want both the ability to shorten your functions with overloading and the ability to make your underlying types dynamic. There's a lot of fiddly-bit optimization in that space that is both necessary and much easier to do if you can transition quickly and smoothly from one type to another without having to boil an ocean of declarations.

Granted, you then have the problem of having to deal with overflow on different types, but that's just one of the ocean liner of problems you've signed up for if you're working in the graphics space. May the odds be ever in your favor. ;)


`.pow` is the least problematic part of my example by a fairly long shot. Even keeping it, with operator overloading you'd get

    x + ((x + ONE).pow(2) - x)
or

    x + (pow(x + ONE, 2) - x)


Yes, boost has a couple of instances of eyebrow-raising uses of operator overloading. Including stream operators, that still does not prove that operator overloading is an anti-feature.

On the contrary, when all you have is a couple of examples that aren't even that bad and a slippery slope argument, you are on very thin ground, rhetorically speaking.


It's clearly a balancing-act thing. My scales tip to the hater side.


I am certain that if you did some game programming in a language without operator overloading, you would quickly change your mind.


It's funny. I know very little about C++ and my first thought was that this was, essentially, a shortcut for something like:

    path += '/' + 'yourfile.txt';
It's also entirely possible that I'd already subconsciously read ahead and so knew that it couldn't have possibly been division.


It's easy in a little two-line snippet discovered in the context of a discussion about operator overloading. It's much less easy to understand when grepping for a config file name (which is where I found it) where you don't even have the declaration of the variable to give you the context that you're dealing with anything other than char*.


>> Experience from languages that support those features has shown that what we gain in the very few situations where those extensions make sense

Nope, this is just incorrect. C++ is completely dominant in large swaths of the software industry largely because operator overloading allows the writing of generic algorithms (which allows you to write large-scale software without losing C-like performance). Without operator overloading, it is much harder to write a function that can be specialized on types that weren't specifically designed for such use.

In Python, Numpy, the Decimal class, I could really give examples for days of cases where operator overloading is essential. Go doesn't have it, Go doesn't have a lot of things, and Go will always be an also-ran language that isn't adopted outside of a very narrow domain.

The fact that built in types are 'special' and only they can support operators such as [] is enough for me to avoid the language. The complete lack of generic programming is more than enough.


Could you go into more detail about why operator overloading enables generic functions in a way simple functions don't? I don't see it.


Without operator overloading, it's hard to make generic numeric algorithms palatable.

Compare (fake Go-with-Swift-generics syntax):

    func Distance<N>(x N, y N) N where N Number {
        return x.Mul(x).Add(y.Mul(y)).Sqrt()
    }
Versus:

    func Distance<N>(x N, y N) N where N Number {
        return Sqrt(x * x + y * y)
    }
If your reaction is "well, Distance doesn't look too bad like that", I can replace it with matrix multiplication or point-inside-triangle testing. Not having overloading quickly fails to scale.


Of course, you frequently don't want operator overloading for matrix multiplication. Avoiding allocations is a big deal for speed, and so c.Mul(a,b) is often the right answer even with operator overloading.


I'm not aware of a language that allows operator overloading for matrix operations, but denies you access to c.Mul(a,b) or the equivalent when you need it. I find operator overloading pretty crucial for prototyping this sort of algorithm even if I'm later going to refactor it for performance.

edit: rereading the earlier comments, this may be orthogonal to the GP's claim that operator overloading is crucial for generics. Ooops. :)


How does allocation relate to operator overloading?


It gives you more explicit control over the storage being used for expression evaluation.


Can you give an example? I don't understand.


Wouldn't breaking it down into components instead of using a one liner be an appropriate response for readability's sake?


For the distance formula? That's about the simplest graphics routine in the world. If you can't readably write distance without temporaries, the language isn't really usable for (generic) graphics programming. Replace distance with bilerp or point-triangle intersection tests (as I did in a sibling comment) and you'll see what I mean.

(It's totally fine for a language to be not interested in that domain. But that doesn't mean operator overloading is bad. Overloaded operators are essential for some domains.)


Actually your distance implementation is a good example of needless stuffing of expressions into a single function.

Many times you don't care for the (costly) square root, so a distance-squared function can be useful.

Multiplying x by itself ("squaring") can also be a useful function that is used a lot.

    (defun distance (x y) (sqrt (distance-squared x y)))
    (defun distance-squared (x y) (+ (square x) (square y)))
    (defun square (x) (* x x))
In the same way that we can decompose our code, we can also decompose the concept of "operator overloading": what it gives you is the ability to use one-letter (1), fixed-arity and precedence-following (P), infix (I) operators for your own or someone else's operations (G).

In languages that support 1PI properties, you'd often overlook such decompositions because it's quick and easy to write sqrt(x * x + y * y). To read it, also, but then you find yourself doing more and more complex calculations in-line. Reading suffers. You may end up with something that's worse than the corresponding code in a language that encourages defining small functions instead. (Lisp, of course, lets you use any combination of these properties, but the latter style is the one normally used.)

Yes, this is a Go thread... but I'll leave it here anyway.


That's true. We had to write our own (generic) GLM library in university. Definitely wouldn't want to be doing that in Go.


Because you would have to come up with a naming convention ahead of time.


And you're suggesting that nowhere in what used to be called the C++ Standard Template Library is there a place where they didn't have to come up with a naming convention for functions?

I think you may have the causality swapped here; I think <algorithm> looks the way it does because they limited themselves to things that could be easily expressed with operators --- for a long time, to the detriment of the language; see: STL associative containers, operator<, and the longstanding lack of a standard hash table.


How was <algorithm> preventing the introduction of a hash table? The hash function is a template parameter, predefined for standard types, no operator needed.

Josuttis claims that hash tables didn't make it in C++98 due to lack of time.


I think we're talking about different things. Can you give an example of how operator overloading improves one's ability to write a function that can be specialized on types that weren't specifically designed for such use?


Type 1 defines: .add(x)

Type 2 defines: .plus(x)

Type 3 defines: .vector_add(x)

Now, implement a function 'average' that can work on any of these three types.

If you can get everyone in the world to agree on a convention for how to express 'addition', then there is no difference, except we have already had a convention for 500 years, and it is the '+' operator. Why you think the '+' operator is confusing but .plus() is not, that is what confuses me.

Or how about 'minimum'. In C people end up writing a minimum C macro, because there's not even a way to write one function that works on int, long, float! What a sad world that is, where you have to meta program to implement min(x,y).


Operator overloading is often problematic because the operators come with semantic baggage, such as properties they maintain, and implementations of those operators often don't maintain those properties. For instance, addition is associative, and has no additional side effects beyond the value it produces, but an overloaded operator won't necessarily implement those. Abstractly there's nothing wrong with that, but in reality it has proved difficult. Operators also often come with an order-of-operation hierarchy designed for mathematical operations that map poorly or confusingly to what the overloaded operator is doing, which causes further practical mismatch.

Operator overloading works best when being used in domains where the original constraints hold; for instance, adding a matrix to a matrix is mostly the same as adding two numbers, though if your type system can't enforce that the matrices are the same size at compile time, you still added the ability for + to throw an exception, which it will never do with ints. Operator overloading got its bad name from cases where people were overloading the operators to do something entirely unlike what the original operator did, causing a mismatch between the user's expectations and what it actually did and therefore bugs.

Operator overloading isn't really "right" or "wrong" per se, but it's probably a bad idea for anything that isn't able to fully implement the contract of the operator, including "associativity", "no side effects", whether exceptions can be thrown, etc.

If you read carefully, you'll generally see operator overloading arguments have two groups talking past each other, one cursing things like C++ streams that basically overloaded the operators in a meaningless way for nominal convenience that causes a lot of long-term headaches, and the other praising the benefits of overloading for math, since math is the big case where it works correctly.

Back on topic, Go correctly does not have operator overloading because Go's authors, as near as I can tell, have no intention of Go being good for mathematics.


> Operator overloading isn't really "right" or "wrong" per se, but it's probably a bad idea for anything that isn't able to fully implement the contract of the operator, including "associativity", "no side effects", whether exceptions can be thrown, etc.

Operators are not special here. What you are saying is basically "don't claim to implement the interface if you didn't implement it".

This exact problem exists widely in, for example, Java. How often do you see a Java object where someone overrode equals() but not hashCode()? I've seen that problem vastly more often than anyone doing something crazy with operators.


> you still added the ability for + to throw an exception, which it will never do with ints.

Not quite true, the language could implement checked over- or under-flows (I believe Swift does)


"Not quite true, the language could implement checked over- or under-flows (I believe Swift does)"

I am one of the apparently about ten people who thinks that should be the universal default. The vast bulk of people disagree, and I was trying not to poke the sleeping dog. :)


> I am one of the apparently about ten people who thinks that should be the universal default.

There's a handful of us, a handful!

FWIW Rust checks for overflow in debug mode, and while that's elided by default when compiling with optimisations it can be re-enabled with a -Z flag:

    > rustc test.rs
    > ./test
    thread '<main>' panicked at 'arithmetic operation overflowed', test.rs:4
    > rustc -O test.rs
    > ./test
    Overflowed!
    > rustc -O -Z force-overflow-checks=on test.rs
    > ./test
    thread '<main>' panicked at 'arithmetic operation overflowed', test.rs:4
And of course some languages bypass the whole thing by automatically promoting to dynamically sized integrals.


Well, that's that then; now I just need a Rust project and I'll start learning it. (I'm only an early adopter of languages, not a bleeding-edge adopter.)


"everything is floats" is also technically a solution, but with a curious fail-state.


Ah, I see what you mean. You can see this issue in action in Go's math package, where "Min" is defined on only float64.

I was lumping this in under the general category of the "Go has no generics" issue; without generics, Go can't do this anyway (and the closest solution you would have would be to define an interface Algebraic that specifies functions algebraic types need to support, then implement operations like min and average atop those interfaces). I'm still of the personal opinion that (as other commenters noted) what you gain in being able to define operator+ you lose in boost developers getting clever and implementing operator/= on paths; even if we had generics, I'd personally find a Go-like solution of declaring the function package an algebraic type had to support (via an Interface) preferable.


I think that if you begin with the idea that you have multiple conventions and cannot avoid that, then you won't solve the problem with a "+" operator either; you just have one more convention: Type 1 defines .add(x), Type 2 defines .plus(x), Type 3 defines operator+, ... If you assume that you can convince people to adopt a convention, you can use .add(x) and avoid the problem in the first place. Go, for example, tries to always have one obvious way to write things. The Go standard library is the idiomatic Go bible.


> Now, implement a function 'average' that can work on any of these three types.

I'm not convinced this is such a large problem that its solution is worth the myriad downsides that operator overloading is chained to. There's lots of schoolbook examples like this, but I've almost never seen operator overloading used well in practice. There are a few examples (Boost shared pointers are somewhat easier to read), but they're few and far between.


> There's lots of schoolbook examples like this, but I've almost never seen operator overloading used well in practice.

I do graphics programming and I literally rely on it all the time.


As someone above was mentioning, I don't think that go was ever intended for heavy math programming. It's a great language for writing network applications that is more performant and deployable than scripting languages, yet easier to use than low level languages. Use rust or c.


Please elaborate on the myriad of downsides that operator overloading is chained to. I am honestly not aware of any downsides that are unique to operator overloading.

Hint: Things like "they can be abused" or "you can do crazy things like have + return a dot product" are not unique to operators. I can very easily define .clone() in Java to return a dot product as well, or have .equals() do in-place addition.


At this point, operator overloading would simply be another convention, would it not? Your type would not work with mine if I used .add instead. You did not solve the problem of having to get everyone to agree on a convention.


Operator overloading is just overriding a base class (interface), plus a special and arbitrarily limited syntax, plus compiler optimizations.


> If I need to find all instances of vector addition in my code and I'm searching for '+', I'm going to have a bad time.

This is true of methods too. If you're searching for vector addition and you grep for "Add()", you're also going to have a bad time. To have a reliable code indexing scheme, you need typechecking/name resolution information, and once you have that you can easily handle operator overloading as well.


The difference is that "Add" is not a term that many search engines will drop on the floor. When you get out of the range of plain ASCII strings, you're leaving the range of symbols you can assume your tool of choice will index for search.

This is likely a short-term reality, but it's the current reality.


How many code indexing tools are in use that aren't grep/ack/ag (because you can grep for +) and don't do semantic analysis? I can't think of any. Visual Studio, DXR (what I've used), all documentation tools, etc. can all handle overloaded operators, and have been able to for years. (Hasn't Visual Studio been able to index overloaded operators for, like, at least a decade?)


github.com


That's fair.

Though I've always found GitHub's code search to be less than useful in general. I think it doesn't really make sense to not have overloading because GitHub doesn't support semantic indexing in 2015, especially since you can easily just not use GitHub for code search.


GitHub search breaks on really simple things, too:

- Queries <= 3 characters in length

- Certain common characters are totally not supported in queries - quotes, etc.

- Forks cannot be searched.

I always just clone and use a proper search tool if I know what I am looking for is in a specific codebase.


The way Haskell and Rust solve this is that they constrain the types of the overload. If you want to overload +, you implement the Add trait, which has to have a certain type. The Add trait is googleable, and it provides a hint as to how the operator is to be used. In practice Rust and Haskell have much less of a problem than C++ in this respect. Additionally, having a strong culture helps (which Go obviously does). Python has no restrictions on overloading, but it's much less abused than in C++ because there's a cultural aversion to making operators do crazy stuff.


I agree that operator overloading is probably a feature not wanted in a language with this target problem domain.

However "working as intended" I think is also the response of Go to their lack-of-generics, which I think is kind of crap.


About generics, this is highly debatable. It's a design choice, not a decision made by accident. You're surely going to have boilerplate code in some cases, but the language as a whole will be a lot more readable and simple. Simplicity is the most wanted feature of Go, from the designers' perspective, I think. In the long term it's preferable to have explicit and simple code instead of complex magic. Is this the correct view? We'll see. Honestly, I'm starting to appreciate it. They may have a good point.


How is code with generics more complex than without it?


For example, the C++ template rules are themselves a Turing-complete language. If by generics you mean some really basic features, I think you can do pretty well with interfaces; they're already in the language. If by generics you mean the full package, I think you could end up with something pretty complex all the time.


Most generic implementations are not Turing-complete, and looking at C++, this doesn't sound like a particularly tempting proposition.

And while I'm not particularly familiar with Go, I don't see how you can get a feature set equivalent to a simple implementation like Java's out of Go interfaces ("you can just cast" is not a good answer).


I'm probably missing the use case. What is the problem that you are thinking of, you can't solve with a Go interface?


Say, a generic container.
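To make the pain concrete, here's roughly what a container built on interface{} looks like in Go without generics (a sketch, not real library code):

```go
package main

import "fmt"

// Stack built on interface{}: it accepts any type, so the compiler
// can't stop you from mixing ints and strings, and every read back
// out needs a runtime type assertion.
type Stack struct {
	items []interface{}
}

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	s.Push("oops") // compiles fine; a generic Stack[int] would reject this
	top := s.Pop().(string) // assertion needed; wrong type panics at runtime
	fmt.Println(top)
	sum := s.Pop().(int) + 1 // another assertion just to do arithmetic
	fmt.Println(sum)
}
```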


They're talking about the compiler and the language spec not Go code.


That's not how I interpret parent's comment.


> the language as a whole will be a lot more readable and simple. Simplicity is the most wanted feature of Go

This is what Java designers thought as well. See where this has led them.


Without generics, you can never write a type-safe data structure. I can't imagine the hubris of thinking the language already contains all of the data structures it needs.


This isn't true. You can write a type-safe data structure; it is simply for one type, and you can't share it. That inability to share is the part that generics fix, but saying you can't write type-safe structures is inaccurate.
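For example, a stack written for one concrete type is perfectly type safe; the cost is that supporting a second element type means copying the code:

```go
package main

import "fmt"

// IntStack is fully type safe, but only for int; a FloatStack would be
// a copy-paste of the same code with the type changed. Generics fix the
// sharing, not the safety.
type IntStack struct {
	items []int
}

func (s *IntStack) Push(v int) { s.items = append(s.items, v) }

func (s *IntStack) Pop() int {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s IntStack
	s.Push(1)
	s.Push(2)
	fmt.Println(s.Pop()) // s.Push("x") would be a compile error
}
```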


[deleted]


> to keep the language spec and compiler simple

How is this not a design choice?


Yeah, you are right; guess it's too early in the morning for me to be responding to others. =D


>I think is also the response of Go to their lack-of-generics, which I think is kind of crap.

The response I've overwhelmingly observed from qualified voices isn't "boo generics!" but "generics are terribly difficult to get right, and we want to know more about Go's niche before making any design decisions in that realm".

I suspect you're reading a lot of drivel from web programmers who discovered net/http last Thursday. It's hard not to be frustrated with the internet, I concede...


A huge variety of other languages have successfully implemented generics. The Go team seems to be looking for some mythical ideal solution with no tradeoffs, and will therefore never do it.


>A huge variety of other languages have successfully implemented generics

Completely irrelevant; the argument isn't "generics are hard [full stop]". The argument is "generics are hard" AND "we don't know Go's niche well enough to commit to any specific approach".


But how can they not know that, after so many years?


You'd have to ask them, but I suppose it takes years to carve out a niche.

What I can say is that I recently interviewed a Google engineer who was conducting a survey of Go's usage, so it seems they're at least studying the question.


> However "working as intended" I think is also the response of Go to their lack-of-generics, which I think is kind of crap.

Where the hell did you get that from?

https://news.ycombinator.com/item?id=9622417


While I think your point about using them improperly is good (I know I've seen some weird shit back in the day I did C++), I still think there is merit to allowing operator overloading. Many times you may create units that should work when added together but need unique logic to make that happen, and that just can't happen in Go today (you have to make an add() method or something similar).

So I think operator overloading can be exploited pretty horribly, but it can also make code far more intuitive. It's certainly not something that can't be worked around, but I always err on the side of letting it happen and letting the community direct people to using it properly.
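A sketch of that kind of unit type in today's Go, with a made-up Angle type whose addition needs wrap-around logic (exactly the sort of thing an overloaded + could express, but which must be a named method instead):

```go
package main

import "fmt"

// Angle in degrees; addition wraps around at 360, so plain + on the
// underlying float64 would give the wrong answer.
type Angle float64

// Add is the method-call workaround for the missing operator overload.
func (a Angle) Add(b Angle) Angle {
	sum := float64(a + b)
	for sum >= 360 {
		sum -= 360
	}
	return Angle(sum)
}

func main() {
	fmt.Println(Angle(350).Add(Angle(20))) // wraps to 10, not 370
}
```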


Agreed. Explicit is better than implicit in this case (and most, if not all, cases).


Operators are no more or less explicit than functions or methods. "+" and "Add" are just as explicit.

A difference is that operators tend to have semantic baggage, which can be a huge boon when using operator overloading for e.g. calculations. C++ has demonstrated that it was a very bad idea to overload operators against their semantic baggage, but the lesson to draw from it is "don't do that", not "operator overloading is the devil".
