Hacker News: Why Go and Not Rust? (kristoff.it)
471 points by kristoff_it 35 days ago | 477 comments



Not disagreeing with the sentiment, but I dislike the word "simple" used to describe languages, because it has two meanings: simple = "consists of few parts" and simple = "easy to use", and these are not the same.

Certain things in Go are not easy (not simple), because Go has few language features (is simple).

Go definitely has a lower barrier to entry and is very productive for a certain size and complexity of project. However, as you start pushing beyond the scale Go is designed for, Go becomes less simple to use. To appreciate things like generics, non-nullable types, and no shared mutable state, you have to be working on a problem that requires them, and only then do these extra features pay off and make development and maintenance easier.


Whenever I used the word `simple`, I did not mean easy. You raise a good point and I agree that simplicity != ease. I think it's a well-known point of discussion from Rob Pike and, although less authoritative, it's also the focus of my first ever blog post [1] :)

That said, I wanted to talk about Go for enterprise development, and I did explicitly make the point that some abstractions are, in my opinion, detrimental to the reality of enterprise software, and I suspect generics might be one of them. It's a big topic and I honestly don't know the definitive answer, but I just wanted to point that out. I do agree that non-nullable types are good and a big hole in Go's repertoire.

[1] https://kristoff.it/blog/simple-not-just-easy/


Have you gone back and read that post you linked recently?

It seems to me like the final section has a lot of bad things to say about the methodology of a Go developer: huge, bloated toolchains required to make progress; code volume rather than appropriate abstraction. To me, your MS Word vs LaTeX graph sure seems like it's damning "easy" languages like Golang in favor of other languages with more powerful abstractions.

You also assert that Golang is easier to learn than other languages. I don't think you've really got a leg to stand on there. It's "easy" for folks doing service development because the arbitrary decisions made by Golang were made from the perspective of someone experienced at writing web services. If you already know C and a bit of Python, Golang cherry-picks a lot of good stuff, but if you don't (or you're not writing a bunch of web services that essentially do nothing but punt to C frameworks and validate strings in request headers) then Go is going to be a struggle, and folks have pointed this out.

It's pretty surprising to me that you write about how generics are the death of enterprise software when so much quite-usable software uses generics. Why do you simply get to forget the existence of the huge body of Java work and instead assert it's bad? Android uses generics in many cases where appropriate, and it's one of the better UI kits available these days.

Even the Actor methodology you're citing as part of Golang as good is actually a fairly sophisticated abstraction with lots of implications for the runtime and execution order. Why is that specific programming abstraction given a pass because of its benefits, but writing a generic linked list is going to be the death of your programming organization?

You also defend the structuring of many enterprise groups even as you suggest that they lack training, refuse to pay technical debt, and place unreasonable burdens on developers. You seem to have just accepted this and said Golang helps you be more complicit in this mode of operation that you also seem to suggest is somewhat bad. Why would we want to pander to a methodology that asks junior developers to proceed without training and places focus on process and hyper-specialized domain experts rather than clear communication, sound architecture and sustainable velocity?

I'm quite confused how you can hold both these opinions at the same time. It seems like they're contradictory.


> "easy" languages like Golang in favor of other languages with more powerful abstractions.

Go is easier to learn than C# (I'll omit Java in this comment as I have more experience with the former), but that doesn't make Go an "easy language". The name itself implies that mastery doesn't come immediately.

The powerful abstractions that you mention, including generics, are indeed good things, but pardon me, I suspect you've never seen how badly and easily they can be abused in certain environments. You've probably seen the Factory<Factory<NaturalNumbersFacade>> joke somewhere; real-life enterprise software is sometimes like this, but unironically.

> Generics are the death of enterprise software

That's hyperbole; I've never made such a strong statement.

>Even the Actor methodology you're citing as part of Golang as good is actually a fairly sophisticated abstraction with lots of implications for the runtime and execution order. Why is that specific programming abstraction given a pass because of its benefits, but writing a generic linked list is going to be the death of your programming organization?

Goroutines and channels are simpler than the threaded multiplexing that C# does with its stackless coroutines. My argument is mainly related to the many ways you can mess up async/await in C# vs the 2-3 ways you can deadlock in Go. I link to a talk, an image and a blog post related to the subject.

> You also defend the structuring of many enterprise groups even as you suggest that they lack training, refuse to pay technical debt, and place unreasonable burdens on developers.

I don't defend nor encourage this, but that's what happens in real life. It's the result of many factors at play, some of which I described in the post, some of which even I don't understand. If I did, I would be way more rich :)

You can refute my assessment of reality, but "being complicit" is not exactly appropriate when 80% of the enterprise consultancy jobs out there are like this.

> Why would we want to pander to a methodology that asks junior developers to proceed without training and places focus on process and hyper-specialized domain experts rather than clear communication, sound architecture and sustainable velocity?

You can't change how things work until you understand why they work the way they do (or at least you can't reliably change the world unless you do). Rejecting the current state of things is step 0 of N, and I've learned from my experience that change in big systems comes incrementally; you can't "disrupt all the things", so Go is (in my opinion) a step in the right direction.

I would recommend you talk with somebody you know that has this type of work experience, they will be able to convey to you how these things work better than I can, probably.


> The powerful abstractions that you mention, including generics, are indeed good things, but pardon me, I suspect you've never seen how badly and easily they can be abused in certain environments. You've probably seen the Factory<Factory<NaturalNumbersFacade>> joke somewhere; real-life enterprise software is sometimes like this, but unironically.

Isn't that more of a criticism of enterprise software than generics? If they can write ugly generic code, they sure as hell can write ugly Go code as well. They're just different programming styles, and while some like one, others prefer the other. Even when you code in ASM, you'll probably end up writing generic code down the line; you'll just be managing it manually rather than having a compiler, preprocessor, or templating engine do it for you. It's a preference, and I say to each their own. Also, the whole moralizing, holier-than-thou simplicity and anti-abstraction talk is getting kind of annoying, but then again, there are lots of moralizing people in the Rust camp who are just as annoying.


> Isn't that more of a criticism of enterprise software than generics?

Absolutely yes! My whole argument is restricted to (how I experienced) enterprise development.


Isn't this just post-hoc rationalization though?

People complain that Rustaceans are insistent to the point of being obnoxious about Rust. But I insist that Gophers turn post-hoc rationalization into an art form. Things are bad because Go didn't do them, if they were good Go would have.


> You've probably seen the Factory<Factory<NaturalNumbersFacade>> joke somewhere, real live enterprise software is sometime like this, but unironically.

I'm sorry, this is a terrible argument. That's definitely not a common usage of Generics.

The common enterprise OOP idiom that is mocked is NaturalNumbersFacadeFactoryFactory, which uses no Generics at all, leading to a large number of classes.

Ironically, if one used generics to replace factories like you did in your example, you would be able to replace hundreds or thousands of Factory or FactoryFactory classes in an enterprise application with a single Factory<T> class. Of course, that would probably be pointless. There's a reason multiple factories exist in a program, even though there are ways to replace them with something simpler.

I get it that Go programmers don't want the language to be complex, but Generics themselves don't have to be complex. They solve a lot of problems in a simple way and are MUCH easier to use than ad-hoc generics made using interface{} + Reflection.


It definitely is common with generics, it happens all the time. Here's some production code I've worked on recently:

     new TypeReference<FooResponseCollectionResource<MemberUpdate>>(){}
I like generics, but I don't love this sort of thing, and it's not even a misuse of them.


Many languages do support type aliases. In C# you would do

    using NiceType = TypeReference<FooResponseCollectionResource<MemberUpdate>>;
    // ....
    new NiceType()


My point was that the commonly mocked VisitorObserverFacadeFactoryFactory Java idiom doesn't normally use generics at all.

Of course you can mix generics with terribly named classes and OOP patterns, but that's hardly a problem with generics themselves.


What's the alternative? Using an "untyped" variable?

But then you lose type checking, and some people don't like to lose type checking...


Do not use patterns that require you to write code like that.


You never have cases where you have a list of lists? Or do you introduce a pointless wrapper type for the inner lists to "hide" the generics?


That's the answer for me.

The problem in those cases is always the abuse of often-obsolete OOP patterns and misnamed classes.


Or use type aliases that are helpful.


> I'm sorry, this is a terrible argument. That's definitely not a common usage of Generics.

It is very common indeed in the three large Enterprise Java houses I've worked in over the past 15 years. FizzBuzzEnterpriseEdition is the reality in all three of those places. Not only is it reality, it is a requirement.


FizzBuzzEnterpriseEdition doesn't implement any generic classes.

You're just making my point for me.


> Go is easier to learn than C# (I'll omit Java in this comment as I have more experience with the former), but that doesn't make Go an "easy language". The name itself implies that mastery doesn't come immediately.

I'm using "easy" in the sense of "easy" vs "simple". I think that the modern "commonly in-use" parts of C# are about the same size as Golang, to my sense of scale.

> ...but pardon me, I suspect you've never seen how badly and easily they can be abused in certain environments. You've probably seen the Factory<Factory<NaturalNumbersFacade>> joke somewhere; real-life enterprise software is sometimes like this, but unironically.

Pardon me if I think it's profoundly disingenuous for you to conflate factory patterns (which can exist in literally any type system and language) with Generics, and let me again positively beg for pardoning if I come across thinking you don't really understand the Golang argument for generics if this is your go-to example.

> That's hyperbole; I've never made such a strong statement.

Perhaps, but you have lumped generics in with a group of features and said, "The Go community regards as anti-patterns many abstractions regularly employed by Java / C#", and essentially intimated that generics are almost never good.

Further, your rhetoric carefully partitions everyone else's abstractions as risky anti-patterns while below you carefully rationalize Go's abstractions as in fact good. Your argument there essentially boils down to taste and fear.

> Goroutines and channels are simpler than the threaded multiplexing that C# does with its stackless coroutines. My argument is mainly related to the many ways you can mess up async/await in C# vs the 2-3 ways you can deadlock in Go. I link to a talk, an image and a blog post related to the subject.

No, they're not universally so. Firstly, actors and threads define equivalent systems [0], they simply have different tradeoffs within that space. There are some constructs where actors are easier (e.g., when a process maps well to an individual loop consuming a mailbox) and some where they simply are not (e.g., when spinning over shared memory and hoping to pull out a copy to another space). What's more, there's an awful lot of progress on the shared space model by attacking the memory coherency problem.

It's quite possible to build systems that are as resistant to deadlock as Golang using threaded models. They're also amenable to static analysis. There's also a very large and useful body of research on using structures that are unopinionated about the order that they receive updates in (CRDTs are a good place to start here), making the strict linearization of actors unnecessary and even a performance bottleneck sometimes.

You're either unaware of it, or you're uninterested. I don't know, but if it is the former then you should probably keep up with what's going on there.

> I don't defend nor encourage this, but that's what happens in real life. It's the result of many factors at play, some of which I described in the post, some of which even I don't understand. If I did, I would be way more rich :)

You are encouraging it though. You're saying we should use tools that are designed to accommodate it. That's literally baking this mode of operation into our automation at a fundamental level. And as you've implied, once there it often takes monumental effort to get it dislodged.

> You can't change how things work until you understand why they work the way they do

That's my line. The way it works is people suggesting that there is no other way it could work, and then baking these assumptions deeply into their corporate structure.

> I would recommend you talk with somebody you know that has this type of work experience, they will be able to convey to you how these things work better than I can, probably.

Hi. I'm Dave. I'm a SRM at Google right now but I've also been a Director for Capital One, worked in software at numerous companies including Microsoft and about a dozen startups in technical and advisory capacities, and founded (and sold) my own startup. I've been in management in one capacity or another for nearly a decade.

And a lot of my time spent when I'm not working directly on projects is advocating for developers to be more empowered, receive more training, and have the power to actually set and run a sustainable pace, even if that means a slow start to burn away the technical debt in place.

But thanks, I'll keep that in mind.

[0]: "On the Duality of Operating System Structures", by Lauer & Needham http://web.cecs.pdx.edu/~walpole/class/cs533/papers/duality7...


> Your argument there essentially boils down to taste and fear.

I suspect the larger factor may simply be familiarity. It is very rare to see a balanced piece about some programming language - let alone one about two programming languages - where the author has an equal and substantial amount of experience with both and is really able to compare them on their merits.


You can mess up async/await in C# in exactly 0 ways. Calling .Result is just a bad practice and should never be done.


Uhh, I've messed it up a lot more ways than zero, and it's easy to do. In fact, doing the exact same thing in a console app and a Windows Forms app will result in a deadlock in one of them, for absolutely no obvious reason.

Async and await in C# may be wonderful to some, but they are horribly broken to others.


I agree there are caveats; I experienced that too.

I disagree they're broken.

C# 7.1 introduced async main. No reason to call Task.Result in console programs.

Even before that feature arrived, the Visual Studio debugger could resolve that literally in a minute. Press F5, reproduce the deadlock, and you'll see exactly what's locked and why.


I also don't understand your argument on simplicity. You mention "await" at multiple points. You don't like it; we get it. In what way is it not simple? And how does it follow that Go is simpler (using the same definition)?


I actually quite like it. C# has a peculiar async runtime that when misused tends to deadlock, and that's the entirety of my issue with it... and the fact that async/await is a half baked monad in most languages, but I say that with love (mostly) :)


I almost completely agree with everything in the article except the word "Enterprise". It might just be me, but saying it's good for enterprises can mean a lot of things.

We could say it's good for web and services enterprises (probably), but for a different kind of enterprise it can vary, depending on their business model and requirements, as the significance of its features can change.

TL;DR: All languages have a place (IMHO); Go is a simpler language with reasonable value in the current industry for various reasons.


> Certain things in Go are not easy (not simple), because Go has few language features (is simple).

Too true; I touch on that in a post I made here as well. In Go I often found myself having to duplicate logic across functions that, in Rust, could be hidden behind iterators or custom iterator implementations.

While the Rust version is definitely more complex if you look at the implementation of the iterator, in practice you are just using the iterator, and that ends up meaning the problem you're solving keeps locality and is easier to reason about. In my experience, at least.


I wonder if it is actually more complex. Emergent complexity is a thing, and one only needs to look at Go (the game; pun definitely (not) intended). The rules are dead-simple; you can learn them in under a minute. But to actually play the game well, you have to learn a lot more. You have to memorize openings, learn to identify and utilize patterns, etc. There is a lot of "meta-play" going on, and you could say that high-level Go play is full of abstractions. In fact, you won't have a chance without the abstractions. If you only see individual stones rather than shapes, you're done for.

I think the same applies to languages like Go. You don't have generics, but if you want to effectively re-use code, you need abstractions. The difference now is that in languages like Rust, you have predefined and marked abstractions, while in Go, you define them yourself and need to recognize them yourself. Granted, Go also makes certain kinds of abstractions difficult or impossible to manually define within the language, so you may end up with copy-pasted code, but I imagine even that will be manually abstracted through naming and such.

The Rust iterator is a pretty simple abstraction in my opinion. It's essentially a lazy list. You see an iterator used, you know what you're getting. You see the map function used, you know it's the same as a for loop over all the elements. A fold/reduce is just a for loop over all elements with a result variable that gets returned. I really don't think using iterators is any more complex than a for loop, and with a for loop, you actually have to look through it to identify the pattern used and find out what it does. I'd say having to manually check for patterns is more work, but I couldn't say which one is more complex; maybe neither is.


You can encapsulate iteration logic in Go.

I do it in my libraries when warranted.

The pattern is:

  v := NewIteratedValue()
  for v.Next() {
    item := v.Item
    // process item
  }
  if v.Err() != nil {
    // process error
  }
A variation of this pattern exists in the standard library and in third-party libraries.

I know that you meant implementing some sort of iteration protocol natively supported by the language, so that you can write:

  for v { ... }
vs:

  for v.Next() { }
But you ended up making a much stronger, and untrue, claim: that one can't write reusable iteration logic in Go.


That's an apples-to-oranges comparison, because your iterator interface here isn't generic like Rust's. It isn't reusable.

Your iterator has to be specific to a set of types: "v.Item" in your example is always of some specific type. Using an untyped interface{} would be worse than any alternative, because every single use would have to use type switches and casting; you lose compile-time type safety.

Because of this, even if you have a local iteration system, it isn't composable beyond your package and its types. Given this (simpler) interface:

  type Iterator interface {
    Next() (Foo, error)
  }
...then I can write a map function:

  func Map(it Iterator, mapper func(Foo) Foo) Iterator
...but I cannot write a generic one that works with anything other than Foo. And if I need this:

  func MapFooToBar(it Iterator, mapper func(Foo) Bar) Iterator
...then I have to write it just for that purpose.

That's why there's no "iterator library" for Go, like there is with Rust.


> But you ended up making much stronger and untrue claim that one can't write re-usable iteration logic in Go.

Not at all. Are you telling me that `v.Next()` can be written in a reusable way for different data structures? Nonsense.

`v.Next()` will have a single, bound return type. So at that point you're re-implementing iterators for anything you want to iterate on.

Furthermore, let's say I want to `v.Next().Map()`. How can I write a Map function in a reusable way? The entirety of your iterator chain would have to be custom-implemented for the one data structure you're using. Or you throw your entire type information away and use `interface{}`.

With Go, your limit of reusability is hand-writing the entire iterator implementation per type, which is hardly reusable in my view. I touched on this when I mentioned creating separate functions for these patterns. That's all you're doing here: creating custom methods to make something look like, but not behave like, an actual iterator. No Map support, no Filter, no... anything. All you made was a loop, and a hand-written one.

I'm not trying to be overly negative here. However, saying Go has an iterator pattern is, in my opinion, strongly misleading.


Re: simple. It took me a number of years running a tech agency to realize that the words "simple" and "easy" are bad. In my job I now correct myself, and our project managers, from using the word "simple" to using the word "straightforward". I find that the term "simple" connotes a feeling of the task/work being easy and thereby quick to get done. For an agency, the term "simple" speaks not only to risk and ease of completion but also to budget, as a function of time.

Moving 10,000 bricks from point A to B is likely simple or easy, but it takes time. If not careful, a client can think "oh, it's just moving 10K bricks 5 feet over; they should automagically be able to do this, since John said it was 'simple'". What I intended to impart was that I know how to do it, no solutioning needed, we have the skills and resources to get it done, and we can probably get moving on this quickly. Now I say something more like: "Moving 10,000 bricks from A to B is straightforward for my team; when do you want us to get started?"


Rich Hickey describes simple as unbraided: a class, for example, is identity, state, and schema all braided together.

And easy as close by and accessible, i.e. `npm i latest-framework` might be easy but not simple.

https://www.infoq.com/presentations/Simple-Made-Easy/


This presentation had an outsize influence on my professional development as a programmer. If I've watched it once (and I have), I've watched it a dozen times.

edit: The "Limits" slide (go to 12:30 in the vid) is one that I really internalized early on. And looking at it again years later, the principles from that slide absolutely guide my app development:

- We can only hope to make reliable those things we can understand

- We can only consider a few things at a time

- Intertwined things must be considered together

- Complexity undermines understanding


isn't the same idea exactly covered by the term "(de)coupled"?


It can include decoupling, but no it's not synonymous.


> However, as you start pushing beyond "scale" Go is designed for,

Go was designed at Google to be used internally at Google, so frankly it's hard to imagine commonly pushing past the "scale" Go was designed for.


I assume the GP meant "scale of codebase complexity" – which is orthogonal to whether something is "web scale" (ie. everything Google does).


Isn't the Google codebase infamously a monorepo with billions of loc? That seems like it fits "scale of codebase complexity" as well as "web scale"


It's a monolithic repo, but not a monolithic codebase. It contains many many separate projects, libraries, and applications. There's a lot of code reuse, but it's not like you have to wrap your head around the whole thing.


Yes, but much (if not most) of that codebase is in C++.


How do you check out a project like that?


Go was designed for Google-scale and Google-style logs analysis, for which it has been very successful. The fact that it comes with a decent HTTP and RPC server is the inevitable consequence of the fact that every process at that company is expected to present HTTP and RPC control and diagnostic services in addition to its primary purpose.


>Go was designed at Google to be used internally at Google

That's not true, except in a pedantic way. Go was "designed at Google to be used internally at Google" only in that:

(a) a few people at Google, on their own, designed a language (mostly based on an older Plan 9 language some of them had helped build), not at the request of Google execs, nor as an explicit company-mandated project to create a language to solve Google's problems. It was almost one of those "20% spare time" things.

(b) these people added the features that they thought, as far as they were concerned, would be nice for programming Google-style stuff. Those were based on their ad-hoc (and quite idiosyncratic) intuition and personal experience, not on special research into programming at scale or on involving the company at large in it.

Go was designed at Google, but not "from Google", if that makes sense. That is, a few Googlers building a language on their own initiative, among other work, is not the same as Google, i.e. some higher-ups, saying "we need a language of our own that's a match for our developers' challenges" and the company devoting resources to it.

Google has two language projects it really put money on: JavaScript via V8, and Dart. It built top-notch specialist teams, spent lots of money on promotion and branding, built IDEs and developer tools for both, etc.

Apple has had Obj-C and now Swift like that; MS has C#; etc. Those are language projects with the full weight of the companies behind them.

Golang was not like that but, according to all official accounts and recollections, a grassroots project by a small team:

"Robert Griesemer, Rob Pike and Ken Thompson started sketching the goals for a new language on the white board on September 21, 2007. Within a few days the goals had settled into a plan to do something and a fair idea of what it would be. Design continued part-time in parallel with unrelated work. By January 2008, Ken had started work on a compiler with which to explore ideas; it generated C code as its output. By mid-year the language had become a full-time project and had settled enough to attempt a production compiler. In May 2008, Ian Taylor independently started on a GCC front end for Go using the draft specification. Russ Cox joined in late 2008 and helped move the language and libraries from prototype to reality."

It was never officially intended to be "Google's development language", nor did it have a major push from Google. Since then, after the first few years, Google seems to have devoted more money and time to Go, and several internal Google projects have adopted it, but Google is fine with C++, Java, and Python as well.


It seems like you're setting up a false dichotomy. Researchers working at a company are often thinking about solving company problems, even if their particular solution doesn't have management support yet. Getting official approval often involves advocates selling their particular solution to management.

In particular, language designers typically have concrete problems in mind, even if they're inventing a general-purpose language. It's enough to say that Go's designers expected to use their new language at Google, so it needed to work well in Google's environment and solve some problems that weren't currently being solved well in that environment. If it didn't work at Google, then they wouldn't have succeeded at their original goal.

While it's not the most popular language at Google, Go is officially supported for server-side projects (communicating via RPC with many other internal servers), and has been for some time. That's a high bar that few languages meet, and about as official as it gets. If it didn't succeed internally then the Go team probably wouldn't have had consistent management support and stable funding over many years.

The other languages you mention (Dart and JavaScript) are considered client-side only within Google so they mostly don't compete with Go. For example, Node is supported for developer tools and external users, not because Google runs its own servers using Node. (Or at least that was true when I left.)


I don't think it's a false dichotomy. I don't think the teams that originally worked on the development of the Go language were Google production engineers of any sort.

I've been working at Google for 8 years and have yet to touch a Go codebase in production.

It's just not used that frequently.


If we're sharing anecdotes, I didn't write any C++ in over a decade at Google.

But I'm not going to dispute that it's the most-used server-side language, because I've seen the statistics.


How about the statistics you've seen on Go?


I don't remember (other than being up and to the right) and it seems not to be published.


>Researchers working at a company are often thinking about solving company problems, even if their particular solution doesn't have management support yet

Yes.

But it's one thing to have a company's management say "we want a team to create a language to solve programming at our scale", throw full resources and money at it (at C#/Java/V8/Swift scale), and have it adopted by mandate for further development,

(as is often implied),

and another thing to have an independent group of a few devs at a company sit down on their own and say "you know what would be interesting to build? A language to handle what we see as Google-scale problems", then get some more resources, and see some at the company adopt it, as just one more language used along with several other company-approved languages for greenfield stuff...


Uh, Google is unlikely to standardize on a single language. The codebase is too big, the incumbent languages are too well entrenched, and they've heavily invested in supporting multiple languages. I'm not sure anyone claimed that either?

If anything the trend is towards slowly adding acceptable languages, but the bar is pretty high.


It's not that hard to imagine. Dropbox has talked about using Rust over Go for storage because cpu/memory efficiency was considerably better (4x if I recall) in Rust.


Go is pretty explicitly designed (as in there are quotes from the designers) with a low ceiling because Google doesn't want to trust their engineers to have nuanced programming taste.


This is why Go is good. If you care about writing beautiful code, Go might not be for you. If you care about getting shit done, it’s a great language.


Go is good at getting things done... until it isn't and you bump your head on the ceiling.

I know Go and Haskell equally well, and I get just as much shit done in Haskell and don't have to be a copy-paste machine sometimes. There are many benefits of Haskell and like-minded languages besides being "beautiful."

Feels like if the Go designers had the sense to include parametric polymorphism and sums (aka proven features from the 70's whose main "downside" is being weird to Go's target audience), it would be a great middleground.


Eh, in practice I haven’t missed explicit (closed) sum types as much as I thought I would. I started using Haskell many years ago (20-ish?) and have written some medium-size apps with it, and I’m very comfortable with using "switch x := y.(type)" in Go where I would use an ordinary "case y of" in Haskell. The only thing missing here is a compile-time check for completeness.

Missing polymorphism is a valid complaint but it hasn’t had the impact I thought it would.

Meanwhile, some of the programs I had written in Haskell I’ve rewritten in Go, e.g. due to problems with the use of finalizers (causing crashes!) in Haskell. You can chalk this up to poor library design, but it simply isn’t a problem in Go, which seems to rely on finalizers less than other languages.

I’m going to continue using both Go and Haskell, I’m happy with both.


Emulating sums with interfaces isn't the worst but proper sums with exhaustive checking make it way easier for the consumers of your code to also perform matches in a maintainable way. When I emulate sums, I keep the matching internal and at best I expose a function that is the equivalent of an exhaustive match (which takes a function per case)

I've never run into finalizer issues (or finalizers at all really) in Haskell but I'm sure they exist. What libraries in particular used finalizers that you had issues with? In general though I find resource management much easier in Haskell thanks to bracket, ResourceT, and the (imo very good!) exception handling stuff. Async exceptions in particular are a surprisingly nice feature when combined with threading!

The biggest place lack of parametric polymorphism hurts is in concurrency. I have to hand-roll so much concurrency in Go and I don't really have a good option for abstracting over various patterns.

> I'm going to continue using both Go and Haskell, I'm happy with both.

Yeah same here. I'm glad to have them both in my toolkit.


I'm not asking for specifics but do you use Haskell at work? Is it in academia? I have worked for several large and small corps and no one has used Haskell. Most use C++, a lot use Go, a few use some rust, but none of them were using Haskell. I met people who use Haskell in their spare time because of their love for CS things, but never in production code.


Without specifics... I've been writing Haskell professionally & in production for several years now for multiple companies.

None were academia & most were commercial enterprises (that most people have heard of)


Beautiful is subjective, I know, but I think you can write beautiful code in Go. I like its simplicity and explicitness. Those things make it beautiful to me.


I think OP thinks about different scale.

It's not scale about number of users or data, it's complexity scale.

Last enterprise app i worked on had (provided) something like 10.000 different api calls (and yes mostly because of bad/wrong initial design decisions), and over 50k different possible queries that it could run against database. (my job was to "make database go faster")

Just running unit tests took longer than 6hrs (of course most unit test went out and connected to the database, because why not \s).

It all could be replaced with dozen or so 10k to 30k lines go projects each, that would be a lot easier to maintain and to scale out.


People may also mean different things by scale.

For me, scale means more micro-services. For another person it means a bigger monolith.


Which is probably the reason Google has made tools like a code generator to transform code with generics into valid Go code.


“non-nullable types“

This is not some advanced feature. I would think this should be the default in any language.


All the popular languages from last gen, did not have non-nullable types.


We're accustomed to thinking of our world as really fast moving with constant technology turnover, but computer programming languages turn over at maybe twice the speed of human generations. A lot of the features that HN posters, including me, simply can't imagine a language without have yet to penetrate the top ten list of current programming languages at all.

My point isn't that this is wrong, but just to suggest that people's mental models incorporate the idea that programming languages actually move pretty slowly. There's a constant froth of undergrowth in the forest, but it takes a long time to establish a tree, and a long time for it to be supplanted by another.


And it was a mistake in all of them.


"I call it my billion-dollar mistake…At that time, I was designing the first comprehensive type system for references in an object-oriented language. My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years." – Tony Hoare, inventor of ALGOL W.


A good example of that is the difference of allocation handling in both languages: Go is simpler than Rust because in Go everything is automatic, while in Rust there is an explicit Box if you want to allocate on the heap. But avoiding heap allocation is easier in Rust than in Go because of the explicitness vs automatic management.


Good point. Actually, C is a perfect example of this: it is actually quite a simple language that is not that simple to use.


Rich Hockey made a nice talk about this.

https://youtu.be/34_L7t7fD_U


Auto-complete turning his name into "Rick Hockey" does actually explain what's been going on with Rich's hair over the past few years.


How stupid of me not to spot this ...


he also had another talk https://www.infoq.com/presentations/Design-Composition-Perfo...

and I would argue there's the idea that the simplicity of the instrument/tool is not a problem in itself, and we shouldn't lose it in pursuit of 'ease'


Yup. This is the origin of this idea.


Go cannot be simple, because software itself is a complex beast; the language really does not matter.

One thing that gophers say a lot is that the count of Go's keywords is minimal. This is literally true: only 25 keywords as of Go 1.13. But it does not reduce the complexity of writing code, it just hides it.

For example, Go does not have the `new` keyword, but it has the `make` function, which is so powerful that one of my friends actually thought `make` was a keyword of Go.


Can somebody post an example of when you would use generics to solve a problem in the real world, and how that would be implemented in Go instead?


Non-nullable types in Rust are implemented using a combination of (powerful) enumeration types and generics.

Enumeration types in rust can be the usual ones you find in other languages:

    enum UserStatus {
      Anonymous,
      Registered,
      Confirmed,
    }
But their variants are not limited to names, they can be other things[1]:

    enum UserStatus {
      Anonymous,
      Registered{ email: Email, date: Date }, // a struct
      Confirmed(Date), // a tuple
    }
Enum types can also be made generic. For example the Date type could be a parameter in the example above:

    enum UserStatus<T> {
      Anonymous,
      Registered{ email: Email, date: T }, // a struct
      Confirmed(T), // a tuple
    }

Which brings us to the Option type[2] in rust. It is an Enum with a generic type attribute, defined like this:

    enum Option<T>  
    {  
        Some(T), // a tuple with a single item of type T 
        None,
    }  
In rust, when a function says it returns an `i32`, it will always return an `i32`. It never returns `i32 or null`.

If you sometimes want to not return an `i32`, you must specify exactly what else it can be. It is often convenient to return an Option<i32> instead. Then it can return `Some(i32)` or `None`. The difference with the "implicit null" from before is that the compiler will force you to deconstruct the option safely, at compile time. You will never get a "Null Pointer Exception" at run time because of this.

For me, this is huge, and one of the reasons I like Rust more than Go.

[1]: https://doc.rust-lang.org/rust-by-example/custom_types/enum.... [2]: https://doc.rust-lang.org/std/option/


The simplest example I can think of is: Map(slice, func)

You can implement this generically in Go using interface{} types and runtime type checking, but then you have runtime type-checking failures.

A java/c++-esque "generics" implementation would be able to type-check at compile time.


You can implement it in Go using loops, and it's not significantly slower or more error prone.


Unless you want to implement that on top of your custom structure, like, for example, a b-tree. Then you either need to write N different implementations for each separate tree or you need to use `interface{}`.


Yes, custom containers are one place that generics are very useful.

Go has the most useful containers built in. Most code I write doesn't need custom containers, even in languages with templates (I write far more C++ and Java than Go) -- so I find that I don't miss them much when I spend time in Gopherspace.


The most egregious missing container is "set", coming from Python (as many Go developers do). After debugging, reviewing, and writing at least 3 different local implementations of a set, I can't wait for generics so that it can be written once and then never looked at again.


I'm slightly surprised you found three different set implementations - `map[T]struct{}` is the idiomatic one i see everywhere.

I suppose it's not sorted or concurrent-safe?


I mean that people make their own per-file or per-package set implementation for their service. There's no point putting it in a common utility package even within the same company, because it's specific to the type of what they're storing.

`map[T]struct{}` is only a type, it's not an implementation. You still need a handful of methods, that might be named inconsistently in each implementation by different developers. I will be happy when I never have to think about this again


> I'm slightly surprised you found three different set implementations - `map[T]struct{}` is the idiomatic one i see everywhere.

Set storage is not really the concern here, it's the set-theoretic operations which make sets useful (generally speaking, there are cases where all you need is the set being a set e.g. deduplication).

So I wouldn't be surprised that GP found several different implementations of union, intersection, difference, symmetric difference, subset, superset, …


Yeah, but you can't implement it generically.

That means for every conceivable combination of input and output types you have to write a new Map or Reduce function to handle it.

All modern languages except Go and Elm only require one implementation of Map that can handle all type combinations.

It's called parametric polymorphism.


Elm has parametric polymorphism.


you're right. My mistake.


Java is one example of a modern language without proper generics. They have syntactic sugar for Object references, but you can do almost the same in Go using interface{}.


I'm not sure what you mean by 'proper generics', but Java does have compile-time generics, ie List<T>, <A, B> B foo(A bar), etc.


Proper generics would allow you to parameterize with all types including primitive ones, not just Objects. Check out java.util.Arrays class as an example of why Java's generics are not proper: https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.h...


You've got a kind of made up definition of 'proper'. Java has generics, they don't have the best implementation of it I've seen, but I wouldn't call it 'improper'. It's just not a word that has any specific meaning in this context.

I seem to remember an article being written a while ago that found Java's type system, with the addition of generics, was unsound. Maybe that's what you're trying to say.


Java generics are syntactic sugar. They don't add anything substantial. You can use Object (or another bound type) everywhere you're using a generic type. You only need to add a few explicit casts, which are added by the compiler anyway.

You can do exactly the same thing in Go. Just use interface{} everywhere and add a few casts where needed. That is Java generics.


> Java generics is a syntax sugar.

That's not true, Java generics are not just syntactic sugar.

> They don't add anything substantial.

They add parametric polymorphism and type safety.

> You can use Object (or another bound type) everywhere you're using generic type.

Only if you go out of your way to do so, and ignore warnings. This has less to do with how generics work in Java and more to do with the fact that Object can be explicitly cast to another type and vice versa.

> You can do exactly the same thing in Golang. Just use interface{} everywhere and add few casts where needed.

No, it's completely different.


Yes, I know what it's called. I have implemented generics in compilers. I still don't miss them in most go code, in practice.


One example is n-dimensional arrays where each element could be a byte, uint16, int16, ... , uint64, int64, float, or even some custom vector. In a language with generics, you could describe the nD array more simply and the operators work depending on what was passed. The implementation can also be shared across a number of similar element types. In Go, though, you’d need to use interfaces and at some point you’d have to make tradeoffs about speed, simplicity of the implementation (e.g. hand-coding each element implementation) vs use of interface function calls to get properties, etc.

Another example is to look at how Go handles the image package vs other languages with generics. https://golang.org/pkg/image/


In fact with Go it's even more obvious: Go doesn't have inheritance!

Traits are a way to solve it. But the problem is that in Go, types are "too open". If you want to constrain them more, traits are not an elegant solution.

Generics are better in this case.

BTW: this is how it is in Rust. Rust has traits, but you fill the rest with generics.

I made a mini Go database command-line tool that I call from Rust (so I get access to drivers that are not yet available in Rust). It's much more boilerplate for some stuff that on the Rust side is just Value<T>.

And there are type errors that Go has that Rust doesn't.

I think that Rust & Go sharing some (most?) design principles (about how to model types) makes it easier to see what each side makes harder...


You can prevent external packages from implementing your interfaces by adding unexported method names to your interface. Usually I'd call it an anti-pattern for interfaces, but I've done it a couple of times when there was literally no conceivable way for an external package to implement something conforming to a particular interface for various technical reasons.


Yeah, most "good" uses of this technique are kind of abusing it to substitute for some other feature the language doesn't have, the most common of which I can think of is combining it with type switches to get a poor man's sum types (what Rust calls enums). You see this a lot in code working with ASTs, especially.


You can solve any problem in a language without generics, such as Go, C, Fortran-IV, or any other Turing-complete language, including the Turing machine "assembly".

The difference is in speed of producing such code, amount of bugs introduced, ability to understand, modify, and evolve the code, etc.


I don't know Fortran, but while C doesn't have generics, you get similar behavior due to macros (for example, min() and max() are macros) and weak typing. So the need for generics there is much weaker.


There are a number of tools (and the "go generate" construct) designed to support code generation. If you just want macro-level behavior, that's easy to do.


Yes, and that's what I'm currently doing, but it is essentially just automating writing the same code multiple times. It still has issues, like having to remember to regenerate the code on change, or making harder to use IDE features to refactor. There are also some limitations it has.


> automating writing the same code multiple times

What would you call a language that forces you to write / use a preprocessor to introduce higher-level features?

Low-level. More specifically, low-level for the domain where you're working. C is admittedly low-level because it had to be minimal even for 1970 and close to the metal. Other languages usually enjoy less drastic design constraints.


Yep, that is why some were already using PL/I and Algol dialects almost 10 years before C came to be, alongside Fortran, Cobol and Lisp.


Just like Borland C++ 2.0 for MS-DOS did, before templates even started to be designed.

I'll let you research when it was released.


Even C has generics now (_Generic), even if quite limited.


Neat, I had no idea.


I have functions that have to deal with hierarchical data objects, JSON like things. All collections are identically typed (although they have optional fields, different length arrays, etc.), but there is more than one type. In order to write simple code, I wanted to write one function that gets them from the database, one to update a single record, one to notify of changes, etc. So my top-level type to cover all of these collections is map[string]interface{}, which in TypeScript would be Map<string, any>.

Then I ran unit tests, and sometimes a weird error cropped up. Turns out I made an error in an append: instead of concatenating two arrays, I added the second array as the last element of the first. That was a time consuming bummer.

This problem could have been prevented by two approaches: copying every function for every type, or generics.

Go is a nice language, which makes some tasks easy to implement, almost fool proof, but it still lacks in other areas.


For a Math.max function you would normally have generic comparables and implement it on top of those.

Go makes you implement max and min in all your codebases.


Same with floats: if you for some reason need to use float32, tough luck. Makes you wonder why they even have types that aren't supported by the standard lib.


Literally any data structure that stores different types similarly: hashmaps, linked lists, vectors, binary trees (assuming you can rely on some ordering function), etc., then all the operations you expect to work on those data structures. You could also define mapping operations that work on the containing structure and don't care much about the specific type of the contents. There's a million and one uses for generics (polymorphism).

In Go, I believe you need to give up type safety to write these implementations, by using interface{}


A lot of the time you’ll end up with an implementation for each discrete possibility, but that’s pretty crappy especially if you have more than 3 or 4 discrete types you need logic for (though not always bad, some go libs follow this pattern e.g. sort offers a method for each basic type). You could use the empty interface (which is essentially ‘any’), but that generally requires you to do casting or runtime type checking and that’s suboptimal in a statically typed, compiled language.

Generally it makes more sense to describe an interface that can be satisfied by different types, and writing logic to handle that interface instead. Go may not have traditional generics but the interface pattern is certainly a kind of generic programming that can be used to achieve some powerful results. Because structs can satisfy many interfaces and interfaces can be satisfied with any kind of struct (if you want an in language example check out the io.Writer interface and the many functions that use it) you can end up with highly reusable code despite not having traditional generics.


In my experience Generics become much more useful, when your typesystem is strict. I use them quite often in Rust, especially when writing functions that should take many different types as inputs/outputs.

You can get away without them in many cases, but if you are working on stuff that you want to reuse, generics are the way to go. In combination with Rust’s traits this is especially cool.



>However, as you start pushing beyond "scale" Go is designed for, Go becomes less simple to use.

I wonder if that is really true; some really complex systems are written in Go and it does seem to scale well. Take Kubernetes, for example.


I view it as a problem of scaling comprehension. Not scaling of performance.


Yes this is still true[1]

[1]: https://www.youtube.com/watch?v=4VNDjwzzKPo


Kind of agree but also disagree. The runway for Go to go (pun unintended) from easy to difficult to use, because it consists of few parts, is far longer than for a developer using a feature-rich language to eventually shoot themselves in the foot. Let me throw another language into the mix that I personally think is _almost_ the best tradeoff between this difficult and easy simplicity: Elixir. It has almost the same number of keywords (as a proxy measure for API surface) as Go, but on the other hand it also exposes metaprogramming, so if you really want to, you can easily shoot yourself in the foot. But in both languages, as I said for Go, the runway for language usability to go from easy to difficult due to their limited features is very, very long.


The tasks you do with any language are often not simple. And the simpler the language you use the more complex and harder the task gets. Assembly language is "simple" in the extreme but getting a task done in it is not trivial.


> However, as you start pushing beyond "scale" Go is designed for

Like Google scale?

That's what Go is designed for.


Yes, that is why Google mostly uses Java and C++ instead.


Or it may be because Java and C++ are much older, so a lot more code is written in those. And companies do not replace working code just because a new language developed in-house has a few more features.


Except that lots of new code also get written in those languages.

Google's major Go projects appear to be gVisor, the Android GPGPU debugger, Fuchsia's TCP/IP stack and volume management tools, and the download server, as it was done by the Go team.

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


You forgot Kubernetes



From description:

"In this talk we explore the devastating effects using object oriented antipatterns in go while building programs in a monorepo. ... Unknown to most, Kubernetes was originally written in Java. If you have ever looked at the source code, or vendored a library you probably have already noticed a fair amount of factory patterns and singletons littered throughout the code base. "

So basically, writing Java in Go led to a clusterfuck codebase.


This is a joke talk (hyperbolic)

Most of the codebase problems in Kube are:

1. we depended on half the Go ecosystem at one point (docker, grpc, etcd, a few others) which is hard to do with dependencies in go (few standardized libraries)

2. performance of serialization mattered and JSON and protobuf were still raw at the time

I don't think Kubernetes is any worse than any other large (3M+ LOC), relatively young codebase I've seen on average.


That's why they created Go, to get rid of Java and C++ for things like download.google.com

They are not scalable and not maintainable at Google scale.

That's why Google C++ guidelines basically disallow everything that is problematic in C++ and enforce a style guide to

> Avoid surprising or dangerous constructs. C++ has features that are more surprising or dangerous than one might think at a glance

> Avoid constructs that our average C++ programmer would find tricky or hard to maintain. C++ has features that may not be generally appropriate because of the complexity they introduce to the code

> Be mindful of our scale. With a codebase of 100+ million lines and thousands of engineers, some mistakes and simplifications for one engineer can become costly for many.


Yeah, better tell that to the Android, ChromeOS, Fuchsia, Maps, Search and Flutter teams.


You are sidetracking the conversation with a troll attitude.

I was replying to

> However, as you start pushing beyond "scale" Go is designed for

Go is designed for Google scale, I don't know anybody pushing beyond Google scale right now.

Anyway, I'm a nice person, you'll find your answer below

Enjoy

----------------

most of those projects are legacy software

did you expect search to be written in a language invented 10 years in the future?

Go is slowly replacing C++ to write the tooling.

C++ is simply not scalable enough for day by day use, Linus knew that 15 years ago.

Even Firefox is replacing it for its engine, it must mean something.

It can be done in C++ doesn't mean it should anymore.

Java is there because it runs in ads systems, you don't replace your core product overnight, just like in some banks you still find Cobol.

Java is the new Cobol.

Fuchsia is written mainly in C and Dart.

You wanna write a Haiku-style microkernel?

C++ is fine, it's just a few thousand lines of code.

Wanna write millions lines of code in a maintainable way at Google scale?

You don't start the project in C++ today, unless you're a crazy person.

Even ES5 Javascript is more maintainable than C++.


As a now-Rust lover and prior (6 years?) Go lover, this article hits home. We have a lot of tech debt in my shop (who doesn't lol), and generally I advocate Rust or Go. I usually start the conversation with: would this be a program you'd write in C++ or in Python? And then I advocate Rust or Go accordingly, with one amendment: data structures, and frequency of conversion, etc.

Having converted some large Go programs to Rust recently, I now have the perspective that some programs, while perfectly fit for Go, are miserable to write without generics. I had a program which did some pretty minor work, but it did so with a lot of varying data structures, and writing it in Go was less than pleasant. Now, Go might handle this better with generics. However, I imagine this will only move the bar. Certain problems are just going to be friendlier (though, more complex!) to solve with a more advanced typing system.

I do look forward to the day that Go can get basic iterator behavior via Generics (if ever). Some of the smallest things in Rust were the biggest sighs of relief for me. Converting a slice of slices from one type to another is just a PITA in Go (in my opinion), and iterators make a world of difference for short, easy to read and comprehend implementations.

In my experience, Go's biggest fault is the code bloat that "simplicity" brings. A simple goal can turn into a multi-function implementation in Go that frankly shouldn't have to be. Then code locality gets worse over time, and suddenly simple doesn't feel simple. Rust's larger complexity (iterators and the like) improves code locality, and thus simplicity, in my experience.

It's a game of tradeoffs. They're both great languages, and they both have their places in my company.


I'm a Rust evangelist too but I think it's going a bit too far to blanket recommend Rust and Go over C++ and Python. Python in particular is still an excellent language and there are many situations where it makes perfect sense to continue using it.

I can see a future where Rust and Go overshadow Python but that future isn't near. Python still has superior libraries for many use cases, a great community, easy to learn syntax, and little training requirement (most people know it or can become proficient in a very short time). Moreover with the type annotations and linting tools it's becoming a lot easier to write a large and maintainable Python codebase.


You only mentioned one tradeoff for Rust and that is the complexity of the overall language.

I would say that the complexity of the overall language is a one-time hurdle. Once you get past that, and once Rust has more mature libraries, which language in your opinion is the better one?


For the record, I like Go. I also like Rust. But I really see them as being aimed at different markets. Go is a better Python/Java. Rust is a better C/C++. The interesting thing about Go is Rob Pike envisioned it as a systems programming language but any language with GC is a complete nonstarter as a systems programming language.

So as for the author's pretext (writing some tool in Go), personally my view is "why not?" not "why not Rust?". If someone had written that same tool in Rust I wouldn't be saying "why not Go?". Whatever floats your boat.

But here is the one part where I disagree with the author:

> Go is unapologetically simple

It is not as simple as it seems, just like every GC language out there. I find Go advocates in particular seem to be dismissive of any GC complexities or downsides. Maybe because many of them just don't have the background of dealing with this from years of Java.

I worked on a team at Google that wrote and maintained a project using Go. Note that I never wrote anything for this project so my experience was second hand but close second hand.

One thing I remember was the Go binary blowing up on memory limits (10GB+) in production. Some debugging found there to be millions of Go channels (IIRC) that needed to be cleaned up. For whatever reason the GC didn't clean them up, possibly because of a non-obvious dangling reference, possibly not.

Anything can have bugs obviously. It's just a myth that GC is a silver bullet is my point.

There was a guy who worked on the Go team (David Crawshaw) who'd stop by every now and again. Occasionally I'd get into debates with him about Go and GC. It's from these that I established my general observations of Go pundits:

1. Full GC pauses and GC in general are totally not a problem in Go.

2. If they were, they're totally going to be fixed in Go 1.N+1 where N = the current version of Go.

(At the time, the argument David made was that Go's STW GC pauses were sub-millisecond so totally not a problem).


The memory leak due to channels not being cleaned up was almost certainly because goroutines themselves are not garbage collected when they're forever blocked. In other words, `ch := make(chan int); go func() { ch <- 1 }()` is a memory leak. This is analogous to spawning a thread that deadlocks by attempting to lock a mutex twice, and has nothing to do with garbage collection.

About the GC pauses, check out the latency graphs at https://github.com/ixy-languages/ixy-languages (being careful not to mix up the Javascript and Go lines). Go handily beat every other garbage collected language in latency, and is in the same ballpark as the two non-GC languages (Rust and C). The peak tail latency times are measured in the hundreds of microseconds even at the highest loads tested, so I think the pundits may have a point.


GC is more than just pauses. Throughput matters too; in fact, it often matters more than latency (for example, when writing batch jobs like compilers).


Sure, but that's not relevant to the discussion here. In this case the context was responding to the statement that Go pundits claim the garbage collector has low latency but that it always has bugs that will be fixed in the next version. I was providing evidence that the garbage collector actually does achieve low latency right now.

Also, there are ways to solve your throughput problems if you have control over your allocations, but there are not ways to solve your latency problems. Indeed, even if you don't have control of your allocations, you can often run multiple copies of your program to increase your throughput, but multiple copies will not help your latency.

Also, tuning your latency does in fact help increase throughput, because if your process spends a significant amount of time in allocations, its latency suffers. It's not a zero-sum knob between the two, especially in the presence of humans that care about tuning the performance of their applications.


You don't have much control over allocation in Go (in fact, according to the spec, you have none at all). Language constructs allocate in ways that are not obvious.

> Also, tuning your latency does in fact help increase throughput, because if your process spends a significant amount of time in allocations, its latency suffers.

I assume you meant to write "throughput" in that last sentence. That's not how throughput is defined. Throughput isn't "how long does it take to allocate", though that influences throughput. It measures how much time is spent in memory management in total during some workload. Optimizing for latency over throughput means you are choosing to spend more time in GC.


> You don't have much control over allocation in Go

Maybe you mean something different by "control over allocation"? Go does allow that, for example "The Journey of Go's Garbage Collector" talk has a good summary,

https://blog.golang.org/ismmkeynote

Specifically the sections about value-oriented programming are exactly that: Go allows the developer to avoid a lot of allocation by embedding structs, passing interior pointers, etc. Compared to Java or C#, it can have a much smaller number of allocations as a result.


> Specifically the sections about value-oriented programming are exactly that: Go allows the developer to avoid a lot of allocation by embedding structs, passing interior pointers, etc.

To elaborate, for example, indirect calls cause parameters to those calls to be judged escaping and unconditionally allocated on the heap. Go style encourages frequent use of interfaces such as io.Reader. So in order to interoperate with common Go code, such as that of the standard library, you will be allocating a lot.

> Compared to Java or C#, it can have a much smaller number of allocations as a result.

C# also allows you to embed structs within other structs and pass interior pointers. Java HotSpot has escape analysis as well, though it's less important for that JVM (and other JVMs), since HotSpot has a generational GC with fast bump allocation in the nursery.

> https://blog.golang.org/ismmkeynote

As I've mentioned before, I also have a problem with the conclusion of this talk: that generational garbage collection isn't the right thing for Go. The problem is that nobody has tested generational GC with the biggest practical benefit of it: bump allocation in the nursery. I'm not surprised that generational GC is a loss without that benefit.


> avoid a lot of allocation by embedding structs, passing interior pointers, etc

I can create a struct and take a ref to an interior field of that struct in C#, can I not? And if I wanted to badly enough, I could use unsafe code and take a pointer to the third byte of an interior field of that struct.


No, I meant what I wrote. The time spent in total in GC includes the time spent performing allocations. If you reduce the time spent in allocations, all else being equal, you reduce latency and increase throughput. In that way, optimizing for your allocation latency also increases your total throughput.


Sounds very similar to "leaking" event listeners in other languages.

The handler and all of the scope it can reach stick around forever even though they should have been replaced or removed.


I'm not disagreeing with you, but the example you're using (the ixy driver) is a pretty poor one to compare garbage-collected languages: Go performs particularly well on this one because the Go implementation doesn't allocate anything on the heap (and then the GC has nothing to collect). Being able to work on the stack only is a feature of the Go language, but it has nothing to do with its garbage collector.


This example is just to show that pauses are not in the same class as in other fully managed languages. I haven't looked at the code to know how much it stresses the garbage collector, but I agree it does seem unlikely to do so.

That said, being able to work on the stack does influence the design of the garbage collector. Throughput becomes less important because you can choose to allocate less, and pause time is more important because latency is harder to optimize. Latency is affected by your whole stack and can’t be meaningfully reduced by just adding more machines in the same way that throughput can be increased.


> This example is just to show that pauses are not in the same class as any other fully managed languages.

But you chose one of the few examples where there is no pause at all, because Go's GC didn't even tick once!

Go could have a 10-second stop-the-world pause and it would still perform the same way here. That's why I say this example is a poor one for the point you're making (which is valid, even though one could argue that 10ms pauses are better than the GC eating 30% of CPU usage, as in actual production usage on a Go service at my former work).


> but any language with GC is a complete nonstarter as a systems programming language.

Check this out: http://www.projectoberon.com/

A full system (not just a language, but also custom CPU on FPGA, OS with GUI, applications, etc) where the language is garbage collected. The system has 1MB of RAM (by default). Note that the garbage collector is implemented entirely in the language itself.

The Oberon language was also a big inspiration for Go.


> The interesting thing about Go is Rob Pike envisioned it as a systems programming language but any language with GC is a complete nonstarter as a systems programming language.

AFAIK, by "systems programming" he meant "non-user-facing programming", not "kernel or embedded programming". So, basically, servers, batch programs and other os utilities.


I'd always assumed he was using "systems language" to just mean "as opposed to a scripting language" like perl or bash.


> any language with GC is a complete nonstarter as a systems programming language

If it was possible to write whole operating systems in garbage-collected LISP in the 80s, then it surely is possible to use a GC'ed language for systems programming thirty years later.


If you dig into those machines, anything vaguely real time including huge chunks of the drivers were written in user writable microcode.


Sure, but that's not really a universal indictment of a system or a language. For example, C gets a lot of justifiable criticism, but the ability to drop down to assembly when it's needed is considered a feature and not a bug.

The highest-performance gc systems (OCaml?) might very well be associated with functional programming languages, but the more relatable system for a lot of people might be the Oberon system, which did influence Go. I'm pretty certain Oberon did not rely on any hand-coded assembly to build a functioning workstation. (although, on the topic of microcode: the newest Wirth Oberon incarnation does have the student program an FPGA to run the system, as an exercise...)


Oberon for sure depends on hand coded assembly. It's just in the compiler rather than .asm or .s files, but it's still a big blob of assembly that the type system and GC have no real knowledge of.

Right now, state of the art for real time GC is that it's a huge trade-off between throughput and determinism. Like orders of magnitude lower throughput to be able to make guarantees that you'd expect out of a desktop system.


> It's just in the compiler

What's your objection to having the compiler emit "hand coded assembly?" It's not clear to me what axe is being ground here. (that's true for me in a big picture way here too. There's a very strong argument against gc languages being made here and I'm not sure if you're saying Oberon doesn't count as a gc language, or what.)

> Like orders of magnitude lower throughput to be able to make guarantees that you'd expect out of a desktop system.

Do you have a current citation for that?


More recent examples to check out were Spin (Modula-3), JX (Java), and House (Haskell).


It was also found in the 80s that any operating system written in LISP was dog slow. They even started making dedicated hardware interpreters of LISP to try and get around this. https://en.wikipedia.org/wiki/Lisp_machine

When you need to start designing your hardware around your computer language you know there's a problem going on.


> It was also found in the 80s that any operating system written in LISP was dog slow.

Lisp Machine operating systems were written in the mid 70s - developed for a new breed of computers: single user workstations with graphical user interfaces.

They were developed at Xerox PARC and at the MIT AI Lab.

> They even started making dedicated hardware interpreters of LISP to try and get around this.

The machines were built to get around slow Lisp systems on time-shared computers with tiny memory. They wanted to have dedicated machines with exclusive memory for that one user.

A bunch of stuff was then invented for those systems, including better automatic memory management like generational garbage collectors.

Around 1980 (!) such machines were extremely expensive and still had tiny hardware: around 1 MB RAM and approaching 1 VAX MIPS of speed. Mid/end 80s they had 20 MB RAM and 5 MIPS...

No wonder: an entirely new class of systems was developed on the slow hardware of the time.


To be fair those machines made sense at the gate counts we're talking about. They were true von Neumann machines with no caches, so the vast majority of the value add was putting the interpreter in microcode where it doesn't compete with the fetch bandwidth of the rest of the application. That's why CISC machines of the time tend to have mem(cpy/set/cmp) instructions, getting the instruction bandwidth out of the way of data heavy loads in a way that makes sense for the use case.

That's also why these dedicated machines tended to disappear right as instruction caches became more standard; an I$ solves the same problem in a way more general way. As long as the hot path to your interpreter fits in I$, it's six of one, half dozen of another.


It's not just putting the bytecode interpreter into microcode; there is the question of data representations suitable for Lisp, like pointers and numbers with tags.

Caching doesn't really help with this. Even when everything is nicely in I$ and L1, it costs cycles to do the type checking. What helps is compiler techniques: type inference to eliminate some of the checks. But hardware can basically bury the cost of the checks; you can do them all the time on all operands.


> started making dedicated hardware interpreters of LISP to try and get around this.

That's a misconception; the actual technology of that type provides an instruction set architecture (which isn't Lisp), to which Lisp is compiled.


The issue with Go and garbage collection is basically that Go's GC is just a GC tuned for latency above all else. The GC of Java HotSpot, on the other hand, balances throughput and latency, and is configurable to target one or the other. The latter is typically what applications want, even if it can't advertise the same pause times. For example, allocation is an order of magnitude cheaper in Java HotSpot than it is in Go.


How do you explain the throughput and latency numbers posted by https://github.com/ixy-languages/ixy-languages? Go is beating Java in both latency and throughput, and the author attempted many different Java GC tunings (https://github.com/ixy-languages/ixy-languages/blob/master/J...)

Maybe tuning for latency is the appropriate trade-off given Go's high level of control over allocation and data layout?



You're jumping to conclusions. I understand the distinction and that there are many options for Java. Please be charitable.

To rephrase the question more precisely: can you explain why OpenJDK, which has a multitude of garbage collector implementations tuned for both throughput and latency, performs worse on both throughput and latency than Go on this benchmark which has a garbage collector that, according to the original statement, is "tuned for latency above all else"?

Perhaps OpenJDK is problematic in other ways (the author of the benchmarks suspects it's JIT induced, even though they attempted to control for that), or perhaps this test depends less on the GC for some reason. Or maybe typical Go programs don't require as many allocations, making allocation time much less important and tuning for latency the correct engineering decision. Would HotSpot do significantly better, as is claimed? That's the sort of interesting technical discussion I'd like to have, instead of pedantry.


Sorry about that.

I guess that, being a long-time part of the Java ecosystem, it kind of gets tiring having outsiders (in general, not referring to you) always mix up Java with whatever comes with their PC, as if C were defined by GCC.

As for the actual question, naturally having value types helps reduce GC pressure. Which on Java's case could be helped by trying out Graal or other JVMs that do better job at escape analysis than Hotspot. Alternatively, although it kind of is cheating, using the language extensions from either Azul or IBM for value types.

In any case, when inline classes (aka value types) arrive, Java can easily do the same as Go here.

JIT and de-optimizations play a role certainly, and are to blame for some performance impact, which can be further improved if a JVM like J9 gets used, given that it allows for PGO across runs.

Finally, while Hotspot has good defaults, tuning all the knobs is a science, even with help of J/Rockit and VisualVM, which opens the door for performance consulting.

The JVMs I mentioned are targeted at soft real-time deployment scenarios; as such they have APIs for low-level control of memory management, while also supporting AOT compilation with PGO, thus allowing for low-level fine tuning out of reach for regular Java developers (pure Java SE implementations).


Thanks.

Since most of the explanations that you gave involve things like controlling value types and data layout, it sounds to me like you might agree with the statement that in languages with better control over allocation and data layout, tuning a garbage collector for latency over throughput can be a good idea because your time spent allocating is less important. Is that fair?


In this case, "tuning for latency over throughput" means "not having a generational GC". Whether this choice makes sense does not depend on how much a program allocates. Rather, it depends on whether the generational hypothesis holds. The generational hypothesis is one of the most powerful and consistent observations in the entire CS field. It certainly holds for .NET, which has a very similar memory model to that of Go, and therefore .NET has a generational garbage collector.


It's not as simple as whether the generational hypothesis holds; there are real downsides to having a generational GC: it means you need a copying collector. Go heap values never move, which dramatically simplifies everything else (e.g. other threads don't need to be paused to update heap pointers, reducing STW times).

Some previous discussion https://news.ycombinator.com/item?id=17551012


You don't need stop-the-world pauses for generational GC. As long as your write barrier ensures that pointers to a young object are entirely local to the TLAB that the object is allocated in, you will never need to stop other threads to sweep the nursery.


Go's a pretty simple language.

It should be straightforward (not easy, because compilers aren't easy) for somebody to write another implementation, tuned for another use case - I can't imagine it wouldn't be easier than doing it for Java.


> How do you explain the throughput and latency numbers posted by https://github.com/ixy-languages/ixy-languages?

Because those graphs measure overall throughput and latency in an I/O setting, not GC throughput and latency specifically. There are many other confounding factors. In particular, the application in question has been tuned to perform as few allocations as possible, so GC throughput will naturally not show up as much as in other apps!

Thanks to its generational GC and TLABs, allocation in HotSpot is like 5 instructions.

> Maybe tuning for latency is the appropriate trade-off given Go's high level of control over allocation and data layout?

Go doesn't give you control over allocation in a meaningful way.


I believe we misunderstood each other. In particular "the application in question has been tuned to perform as few allocations as possible" seems contradictory to "Go doesn't give you control over allocation in a meaningful way."

Allocation being fast isn't very important if you don't spend a significant portion of your time allocating. Isn't it possible that some languages (maybe even Go) by the nature of their semantics spend significantly less time allocating than other languages, and so tuning a garbage collector for latency is the appropriate decision?

Why do you seem to believe that the Go application has been tuned more to avoid allocations than the Java application? If it hadn't been, then you would have to agree that tuning the garbage collector for latency is better because you can invest some effort into your application to have it perform better on both throughput and latency.

Or is the argument that because "[throughput] is typically what applications want", this application is not a good example of most applications (according to what measure of "most")?


You can sometimes use unnatural programming patterns in both Go and Java to reduce allocations in practice. Nontrivial programs will always allocate, though, because various language constructs allocate in ways that are not obvious, and escape analysis is complex and hard to reason about. And the allocation semantics are not part of the language and are subject to change: this is what I mean by it not being meaningfully controllable.

The most salient difference between Go's GC and HotSpot's GC is that the latter is generational. There is no convincing reason I've seen for Go not to have a generational GC, which would dramatically improve throughput by enabling bump allocation in the TLAB. The tiny amount of latency that this could add is by no means worth the cost of making allocations an order of magnitude slower. Allocation in HotSpot is five instructions.


> You can sometimes use unnatural programming patterns in both Go and Java to reduce allocations in practice.

This whole "you can't control your allocations in Go" thing is a strawman. You don't have absolute control because of escape analysis, that's true. But there's a pretty wide gap between "oh I'll just move this allocation point out of a loop" and "oh I wrote a bunch of unnatural code". It's pretty idiomatic to manage allocations in Go, just look at all the posts about using pprof.

Besides, the spectrum of "clarity" (or whatever) to performance is present no matter what language you're using. Ex: ripgrep takes on more complexity so it can search a big buffer of text instead of just a line [1]. I wouldn't call that "unnatural" at all, just systems programming.

[1]: https://blog.burntsushi.net/ripgrep/#mechanics


> But there's a pretty wide gap between "oh I'll just move this allocation point out of a loop" and "oh I wrote a bunch of unnatural code".

Here's just one example: the Go compiler judges parameters to indirect calls, such as interface method calls, as escaping. So in Go, if you don't want to allocate, you have no choice other than to avoid interfaces. But Go heavily encourages the use of interfaces, especially in the standard library.

In C, C++, and Rust, on the other hand, you can use indirect calls without allocating, because the language guarantees the escaping behavior. This is a significant difference.


My experience in the Golang side of this is limited to profiling and optimizing Kubernetes, which doesn't have extensive use of interfaces in general, but over the last 5 years I would say 90% of the wins we have have been:

1. optimizing allocations away in serialization

2. optimizing allocations away in api or biz logic in critical paths around that

3. algorithmic improvements on certain naive code paths

4. blocking

... distant gap

5. everything else

Serialization is its own can of worms in Go, and many places we tried to optimize came down to trying to avoid a) stupid (don't use pointers to naive objects) and b) obvious (use value types) and then hit a wall where there weren't many cheap wins.

I probably can count on one hand the number of hot paths which were interface related, so while I've occasionally been annoyed at the forced allocation when moving into an interface, it's rarely actually something I've gotten a win from.

That's just one particular experience, but the AMAZING integration with pprof from very early days has saved me far more time in improving perf than other Go annoyances have cost (relative to my experience in Java, 2005-2011).


Oh yeah, totally agree that you have way more control in C/C++/Rust, and that's non-negotiable in many important cases. I just don't think that's the same as "you don't have any meaningful control over allocations in Go"; you still have a lot of control. It's a spectrum, and if you don't do enough research you might find you need more control (you used Go) or you might also find you signed up to deal with more low-level memory management than you needed to (C/C++/Rust).


And yet here’s an empirical benchmark providing evidence against both of those claims. What empirical evidence do you have supporting yours? Can you provide it?

Who defines when a construct is “unnatural”? Can you show examples of “unnatural” patterns in the provided benchmark programs supporting your hypothesis that you have to write unnaturally to control allocations? If not, why are you making those claims?

Why is throughput of allocations important for “most” applications? How much time is typically spent on allocations in those programs? What is a typical application? What about typical applications in Go specifically since that’s the only language that Go’s garbage collector matters for?

Also, why did you bring up the point about generational collectors? It seems like a red herring. This discussion has been about tuning for latency, and how in this benchmark, Go’s implementation did better on both throughput and latency than any OpenJDK collector under any tuning, and that tuning for latency is possibly a sound engineering decision.

I’ve also noticed that you’ve made no explicit attempt to answer my more difficult questions, and if you continue to do so I will no longer assume you are acting in good faith.


Again, that benchmark isn't a benchmark of garbage collection; it's a whole-program benchmark.

There's plenty of empirical evidence—a wealth of papers dating back to the 80s—that generational GC provides better throughput than non-generational GC on most applications. I'd be extremely surprised if a properly-implemented generational GC with bump allocation in the nursery wouldn't improve the performance of Go's GC by trading off a small amount of latency for increased throughput. The reason why you won't see a benchmark like that for Go is that nobody has implemented such a collector for Go.


No cited references, no answers about what “most” is, no answers about what “most” is in the specific context of Go, no answers about any “unnatural” patterns existing, no answers about the claim that it was “tuned” to reduce allocations, answering a question that wasn’t asked dodging the question that was (generational having better throughput vs when does throughput matter).

There’s no way I can believe you are acting in good faith. It’s obvious to me now that you’ve just been trolling for years every time you discuss the topic of garbage collectors. I’m no longer going to engage.


I am not sure I'd go that far, but I want to point a thing out:

Back in the 90s, it was a running joke that if you were going backpacking, you should take along a 3' length of fiber optic cable. "If you get lost, just bury the fiber optic cable and ask the backhoe operator for a ride back to town."

In at least three programming-related communities I'm in, I have made a quip about "I think I'm gonna replace the 3' length of fiber optic cable in my backpacking gear with a short post about Go's garbage collector", and people have filled in "so if I get lost, I can just take the post out, and then ask pcwalton for a ride back to town".

I can't say whether or not the argument is made in good faith, but man, there sure is a lot of it.


If you're trying to write high performance Java, that driver was a pretty poor attempt. While not idiomatic in the normal sense, there are a lot of low-latency, high-throughput Java apps that never allocate after startup (and maybe some object pool growth before hitting a steady state). That benchmark was even running out of memory when using the non-collecting GC, which shows they were allocating quite a bit, and their allocations probably escaped, needed to be moved off the TLAB, and became expensive. I'm not sure they really knew how to write high-performance Java.

I've been a part of systems that process millions of events a second and never allocated. These will beat most C++ systems I've seen, and Go doesn't even stand a chance. Java and HotSpot can do some amazing things (esp around inlining), but you do have to do them a little differently (and carefully), but at least you can. My experience with Go is that I never had the same level of control, and I don't think it is possible.


> There is no convincing reason I've seen for Go not to have a generational GC

Well, they explored generational GC (after admitting that their uber-ultimate GC that they marketed as future-proof until 2025 could be better): https://blog.golang.org/ismmkeynote

> It isn't that the generational hypothesis isn't true for Go, it's just that the young objects live and die young on the stack. The result is that generational collection is much less effective than you might find in other managed runtime languages.


> any language with GC is a complete nonstarter as a systems programming language

As someone else hinted, your rant loses everyone familiar with the history of computing right here.

"There were bugs in Go at one time" is not much of an argument against it either.


And in the meantime Java finally got its GC fixed to not stall. Yes, it's a real problem; ignoring it for many years cost some people lots of productivity.


Having used both, I admire Rust as an intellectual exercise, but would write back-end web stuff in Go.

Go is an OK language, not a great one. The real advantage is that you have the libraries that Google uses for their own web server side stuff. Those have been pounded on by billions of transactions, and that the special cases have been handled. There's one well-tested library for each major web service related function.

Rust's libraries don't have that volume of use behind them. Look up "http" in the Rust libraries. You find "Note that this crate is still early on in its lifecycle so the support libraries that integrate with the http crate are a work in progress!"[1] (Rust enthusiasts may comment with a complicated excuse for why this isn't a problem, if they like. That's what the official document says.)

[1] https://docs.rs/http/0.1.18/http/


Your point isn't wrong, but you've linked to a weird package to make it; the http crate is an attempt to standardize some of the types used across multiple implementations. That takes longer than actually building the functionality, as the whole point is that you have multiple implementations, and then consolidate afterwards.

It's also not an "official document"; that's just a package that exists. It's not run by the Rust project.


Rust isn't there for http services yet (if you're being conservative). Come back this time next year and things will be different though.

A nice thing about Rust is that there is so much safety is built into the language that even random libraries tend to work quite reliably.


I don’t think it is necessary to integrate http as a standard library. CGI allows using common web servers, and there is a separate set of ecosystems around those servers.


You still need most of the http libraries for parsing requests and generating responses with CGI. You just skip the TCP bit.


>>Go is an OK language, not a great one.

This is probably the most succinct and accurate description of Go. It's fairly decent and reasonably reliable at small/medium scale (which is why it is popular among the microservices crowd), but has no outstanding features.

IMO by far the biggest reason it became popular is that it is backed by Google (which makes it a "safe" choice). If it weren't, most people would not have heard of it today.


"Decent and reasonable at small/medium scale" is a weird way to sum up Go in a Go vs. Rust discussion, since Go is deployed at drastically larger scales than Rust is currently. I don't think there's any validity to the idea that Go tops out at medium-scale projects; in fact, a more common argument against it is that it makes sacrifices to facilitate large-scale programming that people don't like.


> Go is deployed at drastically larger scales than Rust is currently.

I think it's important to define which "scale" you're talking about here; Go is deployed at a larger scale in the sense of number of deployments, but both are deployed in production inside the largest tech companies in the world. For example, Rust is now at the core of all of AWS Lambda (and Fargate).

That being said, I do think that "decent and reasonable at small/medium scale" is an attempt at damning through faint praise, and is certainly not how I'd characterize Go in any sense.


Go is doing more stuff in large-scale deployments, processing more traffic, doing more transactions, than Rust is. I don't just mean there are a lot more Go backend projects (there are). I mean that some of those projects do a lot more work than Rust does right now --- I don't mean ubiquitous infrastructure things like Docker and k8s, I mean purpose-built components for specific large-scale applications/platforms.

For instance: "Today, most Dropbox infrastructure is written in Go." Or: "Today Go is at the heart of CloudFlare's services". Or: "Rend is a high-performance proxy written in Go with Netflix use cases [ed: all internal memcache] as the primary driver for development". Or: "The search infrastructure on [Soundcloud] Next is driven by Elastic Search, but managed and interfaced with the rest of SoundCloud almost exclusively through Go services." Or: "How We Built Uber Engineering’s Highest Query per Second Service Using Go.". Or: "Handling five billion sessions a day – in real time [at Twitter]".

I don't see where there's room for a "that said" here.

Both languages are obviously capable of scaling.


Yeah, as mentioned, I don't think that's true anymore. Historically, it has, but Rust has had a lot of high-profile, high-traffic deployments in the last year. But honestly, the high order bit is:

> Both languages are obviously capable of scaling.

This is clearly true, and I agree fully. I don't really want to argue "is this deployment really larger than that deployment", as it's kind of silly. The point is that both have demonstrated the ability to scale to the largest workloads, and so knocking either one of Rust or Go on this axis doesn't make sense.


I like Firecracker too, more than I like Docker (which is, obviously, Go). I don't have any problem with Rust. The parent comment, about Go being suited for small/medium scale programming, was wrong.


PHP is doing more by whatever metric you want than Go or Rust. Does that mean it is better?


That's not the question the thread is discussing (thankfully; "better" or "worse" languages is a stupid message board debate to have).


Proof being that almost no one cared about Limbo and Plan 9 keeps being referenced, while forgetting about Inferno.


If Rust had all the libraries you needed, would it be a better language than golang?


I can see moving from C/C++ to Rust for buffer-copying style code, but for most general-use programs the cognitive load required to deal with resource management seems way too high to move from Java/C#/Node to Rust. (I just got called about a shop moving their Node code to Rust, and I can't imagine why any code that would make good Rust code was written in JS in the first place.) Go seems like a non-starter to me -- sure, features can be abused (as implementation inheritance has been for years), but to omit generics at this point just seems silly.

As for moving from C# to Rust, you can learn to program with the Span<> APIs and get the non-GC perf. of Rust for the most part and still keep GC and other C# niceties (like the whole toolchain and ecosystem...) in much less time than a wholesale move to Rust. As for message-passing/agent/channel programming, you can certainly do that in C# if you like, and though there is certainly a lot of foot-shooting that can be done with async and thread contexts, in general the Task<> system is a joy to use compared to almost anything else out there, not only for async code, but for concurrent code.

Don't get me wrong, I always thought that C++ was a "worst of all worlds" language and appreciate moving to Rust from there, but for most user-facing applications that tend to have complex object lifetimes, I just can't see why you'd want to deal with RAII when modern GCs are so darn good.


Have you actually used Rust? I spend very little time (or code) on resource management when writing Rust code. And when I do, it's usually the compiler helping me understand an aspect of my design that doesn't work as I expected. Sure, other languages might allow me to gloss over that, but that will likely come down to finding the edge case in production.


Yes - Rust's type system is very restricted in what it can express because it needs to be verifiable at compile time - this means that some trivial things become a huge pain to describe to the Rust compiler, and Rust programmers seem to develop this Stockholm-syndrome relationship with it and start singing its praises at every turn. Memory management in dynamic graph structures with cycles is hard without GC, and Rust's type system is not equipped to deal with this. Although you can still have leaks by holding on to unused references, the problem becomes a higher-level one, and describing and managing the structures becomes much simpler with a GC.
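For context, the usual safe-Rust answer for back-references is Rc plus Weak. A minimal sketch (the Node type and make_tree are made-up names); note this handles parent pointers, not arbitrary cyclic graphs, which is exactly where the pain starts:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A child holds a Weak back-reference so the parent/child cycle doesn't leak.
pub struct Node {
    pub value: i32,
    pub parent: RefCell<Weak<Node>>,
    pub children: RefCell<Vec<Rc<Node>>>,
}

pub fn make_tree() -> (Rc<Node>, Rc<Node>) {
    let parent = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(vec![]),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));
    (parent, child)
}

fn main() {
    let (parent, child) = make_tree();
    // upgrade() recovers the parent only while it is still alive
    let p = child.parent.borrow().upgrade().unwrap();
    println!("{} {}", p.value, parent.children.borrow().len());
}
```

Workable, but the Rc<RefCell<...>>/Weak plumbing is the "higher-level problem" a GC would simply absorb.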

That guy saying that you can easily avoid GC in C# is also not being realistic, or doesn't know what he's talking about - even language expressions can lead to object allocation in C# - you really need to know what you are doing to avoid the landmines, and it's very clear the language wasn't designed for this if you need a lot of code like that.


LOL, I'm "that guy" I think... Glad we agree about the complex/cyclic structures at least.

Regarding C#, I said for the most part. I stick by my assertion that (a) GC is quite efficient (esp. w/ gen0 objects generated by your "expressions") and far from "lame", and (b) working with value types and Span<> (like Rust slices) together can significantly reduce GC pressure to the point where it's acceptable, assuming there was actually an issue to begin with. Doing this in C# on hot paths is certainly much less effort than moving to Rust wholesale, and you haven't thrown out the GC baby with the bathwater.

All the things that you do to make Rust efficient - allocate on the stack instead of the heap when you can, pre-allocate when size is known, use static/nested lifetimes, use slices to owned structures, etc., you can do in C#, but you don't have to worry about it until it actually becomes an issue.

If you're in the kernel or embedded system or the middle of a game rendering loop, then sure, Rust's compiler guarantees make this style programming easier if that's the way you want to go - and Rust has macros and other features that C# lacks. (Although the .Net JITTer is going to generate type-specialized methods and inline them, etc., w/a Rust macro you know up-front exactly what is being generated, which is nice.)


My experience in avoiding the C# GC comes from the XNA era, when Microsoft had this shitty .NET runtime for the Xbox 360 allowing anyone to develop for it - the GC was so bad you had to be very, very careful to avoid it, and stuff like foreach would allocate (AFAIK nowadays it can optimize that away in some cases, but not all, and you still need to know when if you want to avoid allocation). People wrote code which avoided the GC, but it looked nothing like C# and in fact was more verbose than something like Rust or C++, and you were in uncharted territory since no-allocation programming is not really a C# thing and using high-level features would just lead you into invisible traps. So you ended up using preallocated arrays, static functions, globals, etc. in a language with poor generics (you can't even specify an operator as a generic constraint), no top-level functions, etc.

I've kept track of .NET progress since then and read about stuff like Span and ValueTask; the ASP.NET Core team did a good job leveraging those for perf optimisations (e.g. their optimised JSON parser). In such scenarios I agree C# with low-level stuff sprinkled in is a good choice.

But if your problem domain requires avoiding GC throughout and having a better understanding of what the abstractions will compile to, pick a language that's designed for that. It's like when I had to review some Java 6 code which tried to work around the fact that Java doesn't have value types by using byte arrays - it was just so bad compared to even the C equivalent that it was better to rewrite and go through JNA.


You are right that cyclic mutable graphs are very hard to model in Rust.

But very little code actually looks like that in reality. A lot of code looks like that by accident.


Cyclic mutable graphs are very hard to model in _safe_ Rust (without arena). They can be implemented easily in unsafe Rust, using pointers instead of references, like in C.


They can also be implemented by not using pointer spaghetti ;)

Even in C++ I generally prefer to use handles over pointers for cases like that because graphs are just plain hard to get right, and if you use handles you can get a nice error message when something goes wrong instead of a segfault.
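A minimal sketch of that handle style in Rust (the Graph type and neighbors method are made-up names): indices into a Vec stand in for pointers, so cycles need no Rc, RefCell, or unsafe, and a stale handle is a catchable index error rather than a segfault:

```rust
// Represent a (possibly cyclic) graph with index "handles" into Vecs.
pub struct Graph {
    pub nodes: Vec<String>,
    pub edges: Vec<(usize, usize)>, // (from, to) as node indices
}

impl Graph {
    // Follow every edge leaving the node identified by handle `h`.
    pub fn neighbors(&self, h: usize) -> Vec<&str> {
        self.edges
            .iter()
            .filter(|&&(from, _)| from == h)
            .map(|&(_, to)| self.nodes[to].as_str())
            .collect()
    }
}

fn main() {
    // a -> b -> c -> a: a cycle, expressed with plain data
    let g = Graph {
        nodes: vec!["a".into(), "b".into(), "c".into()],
        edges: vec![(0, 1), (1, 2), (2, 0)],
    };
    println!("{:?}", g.neighbors(0)); // ["b"]
}
```

The trade-off is that "dangling" becomes a logical question (is this index still meaningful?) instead of a memory-safety one.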


To be fair, I spend the same amount of time on resource management in C++ too (compared to Rust). Unless I'm writing a data structure (happens occasionally) or some explicit resource management layer (texture pool, mesh LOD system, etc), I delegate management/ownership to the system designed to handle it.


I'm a hobby Rust user and I love the language, but I definitely feel the heavy hand of the borrow checker a lot of the time. It's the only language where coding something seemingly simple can push me to the limit of my abilities and understanding.

However, I usually pick Rust for performance-critical code with quite a bit of concurrency, so the problems involved are inherently tricky. I'm gaining more and more intuition about ownership and rustc-friendly software design, and so I hope I'll be able to better understand if my struggle is due more to limitations of the compiler and Rust's semantics or the limitations of my skills.

For comparison, I've written ~6000 SLOC of Rust in my lifetime, so I do have some experience but I'm definitely not an expert.


I use it occasionally and trying to port an old Gtkmm toy project to Gtk-rs was an eye opener regarding both build performance and the pain of dealing with callbacks in GUI code.

First, having neither a build cache nor binary dependencies means that C++ wins on the "make world" build, because naturally all my third-party libraries are already compiled.

Then there is incremental compilation, incremental linking, pre-compiled headers and modules to help with the rest.

GUI code then becomes a fest of Rc<RefCell<>> in event handlers, or using arrays with the vector-clock workaround shown in Catherine's talk.
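For readers who haven't hit this: a stripped-down sketch of the pattern (Counter and make_handlers are made up; no real GUI toolkit involved). Because toolkits typically want 'static closures, each handler has to clone an Rc into shared RefCell state:

```rust
use std::cell::RefCell;
use std::rc::Rc;

pub struct Counter {
    pub clicks: u32,
}

// Build two "event handlers" that both mutate the same state.
pub fn make_handlers() -> (Rc<RefCell<Counter>>, Box<dyn Fn()>, Box<dyn Fn()>) {
    let state = Rc::new(RefCell::new(Counter { clicks: 0 }));

    // Each closure owns its own Rc clone; mutation goes through RefCell
    // and is checked at runtime, not compile time.
    let s1 = Rc::clone(&state);
    let on_click: Box<dyn Fn()> = Box::new(move || s1.borrow_mut().clicks += 1);

    let s2 = Rc::clone(&state);
    let on_reset: Box<dyn Fn()> = Box::new(move || s2.borrow_mut().clicks = 0);

    (state, on_click, on_reset)
}

fn main() {
    let (state, on_click, on_reset) = make_handlers();
    on_click();
    on_click();
    println!("{}", state.borrow().clicks); // 2
    on_reset();
    println!("{}", state.borrow().clicks); // 0
}
```

It works, but every piece of shared widget state picks up this Rc<RefCell<_>> ceremony, which is the "fest" being described.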


Could you link that talk?


"RustConf 2018 - Closing Keynote - Using Rust For Game Development by Catherine West"

https://www.youtube.com/watch?v=aKLntZcp27M


I have a bit -- but I don't have a project that makes sense to do in Rust at the moment. I'm mostly doing long-running symbolic AI code where I don't want to mess around with object lifetime because imposing the concept of "ownership" would be difficult. (F#/C# is the best fit at the moment.)


It takes a little while to wrap your head around it -- I think it took me a solid 3 months before it clicked. Once it clicks though, the lifetime stuff is completely automatic. You really don't think about it. I think because Rust's syntax is so similar to what you might see in a C derived language it adds to the cognitive burden of learning lifetimes the first time around, but IMO, Rust should be the first thing people learn.


> long-running symbolic AI code

That sounds cool. An obvious question is, why not a lisp? But generally do you find f# works well on dotnet mixed with c#?


I love Lisp and Prolog the languages and I went there first -- the issue always turns out to be that (a) they make hard stuff really easy, but stuff that should be easy really hard, and (b) IMHO nothing even comes close to the .Net ecosystem when it comes to debugging tools, libs, etc. Clojure+Cursive comes closest, but the JVM world is a bit of a turn-off for me -- maybe I've been away from it too long, but it just seems a bit clunky. (And of course, reified generics, value types, etc., aren't available on the JVM.) The meta-programming story isn't as good as in Lisp/Prolog, of course. JS is (as others have said) an acceptable Lisp and has some advantages for the type of work I'm doing -- my problem with JS (and TypeScript) isn't the language, but the run-time environment of Node, which makes things like true parallel programming a PITA and incurs serialization overhead between workers.


Thanks for the reply. I've been looking at F# or OCaml for a symbolic code generation project, but haven't been able to decide.


Ah, the generics. The fact that empty interfaces have been accepted as the way to get things done is absolutely terrifying to me. I understand that C++'s approach to generic types has slowly evolved into the most absurd collection of left angle brackets out there, but Go seriously needs to shape up about generics. When people are regularly hacking around your type rules and advising others to do the same, then... maybe there was an error in your rules.

Go is interesting and I really wonder how it'd be doing if someone went all in on the multi-return syntax sugar (pretty awesome stuff), the prohibition (mostly) on exceptions, channels and trivial threading and a bunch of other nicely packaged features while giving some ground on Generics. All languages have their warts, but I think this one is particularly easy to address - though it may take a major version bump and introduce some BC breaks I feel like those could be minimized, the biggest cost would be library incompatibility.


Multiple return values are a lot less general and harder to use than tuple values; you can't store them in maps or pass them over channels or compose functions (while f(g()) works, f(g(), h()) is not allowed).
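For contrast, a rough Rust sketch (divmod and add_pairs are made-up names) of what first-class tuples buy you: a "multiple return" is just an ordinary value that can be stored, passed, and composed:

```rust
use std::collections::HashMap;

// The pair is a real value, not a special calling convention.
pub fn divmod(a: i32, b: i32) -> (i32, i32) {
    (a / b, a % b)
}

pub fn add_pairs(x: (i32, i32), y: (i32, i32)) -> (i32, i32) {
    (x.0 + y.0, x.1 + y.1)
}

fn main() {
    // stash a result pair in a map...
    let mut cache: HashMap<i32, (i32, i32)> = HashMap::new();
    cache.insert(7, divmod(7, 2));

    // ...and compose freely: the equivalent of f(g(), h())
    let sum = add_pairs(divmod(7, 2), divmod(9, 4));
    println!("{:?} {:?}", cache[&7], sum); // (3, 1) (5, 2)
}
```

In Go, each call site would first have to destructure the two results into variables (or a named struct) before doing either of these things.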


You could implement an entire corner of the language to work with special-cased / non-reified MRV; that's what Common Lisp does.

Go doesn't do that though. There is some magic for builtins (e.g. variable-arity MRV) but as usual mere mortals need not apply.


> C++'s approach to generic type usage has slowly evolved into the most absurd collection of left angle brackets out there

C++ templates are a purely-functional Lisplike language. There's nothing strange or absurd about it; it's a very vanilla way to approach string rewriting systems.

(It's ugly to read, yes, but then all Lisplikes are too.)


> but for most user-facing applications that tend to have complex object lifetimes, I just can't see why you'd want to deal with RAII when modern GCs are so darn good.

My day job is ~half c++ and ~half c#, and I couldn't possibly disagree more.

The first problem is IDisposable and event handlers. Because c# doesn't have anything like weak pointers, event handlers require that you manually dispose of half your objects. In c, there's a simple rule: you always dispose of your objects. In c++, there's a simple rule: the destructor of your data structure/smart pointer always disposes of your objects. In c#, the rule isn't simple. Half the time you have a nontrivial destructor doing nontrivial things; the other half the time you can leave it up to the GC. But it isn't necessarily immediately clear which is which.

The second is `using`. Again, you must necessarily mix the semantics of non-deterministic GC cleanup and deterministic RAII cleanup. Which is worse than having a simple rule that always works.


> c# doesn't have anything like weak pointers

https://docs.microsoft.com/en-us/dotnet/api/system.weakrefer...

https://docs.microsoft.com/en-us/dotnet/api/system.weakrefer...

https://docs.microsoft.com/en-us/dotnet/api/system.runtime.c...

> event handlers require that you manually dispose of half your objects.

You probably have a software design issue at your day job. C# events can be great, but they're not a silver bullet.

For loosely coupled application wide events, event aggregator pattern works better, see IEventAggregator from Caliburn.Micro for an example.

If the two sides of the event handlers need strong coupling for a good reason, an interface or abstract class for the consumer works better, see ExpressionVisitor from System.Linq.Expressions for an example.


You forgot to add safe handles as well.

https://docs.microsoft.com/en-us/dotnet/api/system.runtime.i...

This, and the thread of the other day regarding having to use C for what C# is capable of: apparently many don't look in their toolboxes.


There is obviously a learning curve but it's hard to argue that:

  #[get("/hello/<name>/<age>")]
  fn hello(name: String, age: u8) -> String {
      format!("Hello, {} year old named {}!", age, name)
  }

is more complex than express + Node, with the added bonus that you get validation for free.


I'm a firm believer in that Rust is the greatest imperative programming language ever designed. But your example is a pathologically simple problem with a horrendously complex solution in Rust. In responding to a simple get request you have to lean on both function annotations and a macro. That code would be so much nicer in Ruby.


Would you post equivalent code in Ruby? (one that, like the above, validates inputs)


If you need input validation like that, which you normally wouldn't in Ruby because your models should do it for you, I'd probably use grape or something similar:

  require 'grape'

  params do
    requires :age, type: Integer
    requires :name, type: String
  end
  get "/hello/:name/:age" do
     "Hello, #{params[:age]} year old named #{params[:name]}!"
  end

In any case, my beef is more with the macro than with the function annotation, which is rather spiffy. The macro hides the fact that string manipulation in Rust definitely requires a little bit of manual reading. And I feel that as soon as your app becomes more complex Rust will just start getting more in the way. And to an experienced Rust dev that might not be a big deal, because an experienced dev knows how to work with strings or which memory management strategy so it won't bog them down. But if you're just working on something and you quickly want to whip out a service that tells people what age they are, I'd definitely go for a quick Ruby or Go service.


One of the things this API can do for you is not give you just strings, but fully typed instances of structs. And you don’t need to do the conversions yourself. So you’re not wrong, but it’s easier than you may imagine.
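Roughly, the conversion the framework does for you boils down to something like this hedged sketch (parse_age is a made-up stand-in, not Rocket's actual API): declaring the parameter as u8 means a non-numeric or out-of-range path segment never reaches your handler.

```rust
use std::num::ParseIntError;

// What "age: u8" implies behind the scenes: parse the raw path segment
// into the declared type, and treat a failed parse as a routing error.
pub fn parse_age(segment: &str) -> Result<u8, ParseIntError> {
    segment.parse::<u8>()
}

fn main() {
    println!("{:?}", parse_age("42"));     // Ok(42)
    println!("{:?}", parse_age("banana")); // parse error
    println!("{:?}", parse_age("300"));    // out of u8 range, also an error
}
```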


Not exactly the same, since there's no UInt8 in Ruby.

    require 'sinatra'
    get %r{/hello/([^/]+)/(\d+)} do |name, age|
      "Hello, #{age} year old named #{name}!"
    end


I am guessing age would be a string though?


yes, the regex part handles the validation.


> the cognitive load required

There's no cognitive load required to code in C++. You just don't know the language.

There are no easy tutorials or learning materials for it, but it's not hard.


I'm not sure if this is sarcasm, honestly. C++ is one of the hardest languages to use, full of edge cases; it blows my mind every time I pick it up.

> You just don't know the language.

Nobody knows the language haha. If anyone tells you they know C++, they're absolutely wrong. This has been, IME, the easiest way to know if a candidate is over-stating their resume -- if they say they "know" C++.


> C++ is one of the hardest languages to use,

No.

> full of edge cases,

Definitely no.

> it blows my mind every time I pick it up.

I'm tempted to say "programming is not for you", but I'll be charitable and just point out that you've never learned the actual language to make statements like that.

> If anyone tells you they know C++, they're absolutely wrong.

That's a load of crapola. It's impossible to "know C++" in the sense of knowing the ISO standard for C++; but that is also true of any other language with a real standard.

Languages without standards, of course, are worse in every way.


C++ (as a result of its C roots) has one of the most obscure and arcane sets of library assumptions out there. Different platforms only need to roughly adhere to a standard, and this can cause wildly different usages. Additionally, it's the only language I've ever worked in that provides ample opportunities to blow off your own foot while copying a string. Lastly, `const`: what does it do, where, and why, and should everyone always use it... C++ is one of the simplest languages at a surface level, but the standard is incredibly dirty.


> Additionally it's the only language I've ever worked in that provides ample opportunities to blow off your own foot while copying a string.

False.

    std::string x = y;
There's no way to mess this up.

Again, learn the language. There is no such thing as "C/C++". 99% of the problems stem from the fact that people don't understand that C and C++ are completely different languages, despite sharing one compiler.

Learn C++ as itself and your problems go away.

> Lastly `const` what does it do where and why and should everyone ever use it...

`const` is a contract that means "I will not modify this object here". Nothing confusing or complex about it, unless you're trying to shoehorn this concept into C semantics somehow.


>> C++ is one of the hardest languages to use

> No.

Most defenses I've seen of C++ boil down to something along the lines of "you're using it wrong (tm)".

I've seen enough C++ to take the side of "the design is probably wonky if it's that easy to do the wrong thing."


> Most defenses I've seen of C++ boils down to something along the lines of "you're using it wrong (tm)".

Maybe, but that wasn't what I said. The problem is that lots of people know C, but very, very few people know C++.

A big part of the problem is that both languages share one compiler, and people come from C thinking that C++ is just an upgrade with some features bolted on.

It's not. It's a completely different language, and if you approach it as "I'll learn a bit of C and then throw in some C++ features" you're setting yourself up for a world of hurt.

The hurt goes away if you forget C, start with a clean slate and learn C++ as a new language.


C++ has more historical baggage than any other mainstream language. Consider something as simple as initialization: https://www.youtube.com/watch?v=7DTlWPgX6zs


That's not true, and an example of it would be scoping rules. If you see 'x' in the code for a method, there are strictly more places it could come from in C++ than in, say, Python or Go.


I have written a command-line log processor in Rust, an XML processor in Go, and some small microservices in both. My experience in working with both languages is that Rust has a steeper learning curve than Go, and it took me longer to feel proficient in Rust. However, once I felt proficient in both I felt much more productive in Rust.

It's my impression that you may need to have worked with functional languages and have an understanding of type systems before you can be efficiently productive in Rust, but those who reach that point can be much more productive in Rust than Go, based on only my personal experience. One of Rust's biggest and best features, and its biggest barrier to grokability, is the borrow checker.

I think Go is a very good language, has awesome concurrency designs (I love Go channels), and is more accessible to more developers, but it also doesn't seem as flexible. I think the OP hit the nail on the head with the idea that Go is designed for Enterprise, where a sea of journeyman engineers works under a small number of senior engineers. I think Rust OTOH is designed for senior engineers, but is usable by less experienced engineers if they are mentored properly, i.e. 1:1 or at most 1:3.

Also, I like the tooling better in Rust than Go. Rust tools are easier for me to install, and the cargo build tool is not part of the language, but there is a standard build tool, so I get to have my cake and eat it too. Meaning I can customized the build tool to my needs, and have different customizations for different projects. That can get out of hand, which is why so many hated Gradle. OTOH, the level of customization is why many shops with senior engineers loved Gradle (e.g., Netflix).

Examples of build tool install:

* Go: $GO111MODULE=on go get golang.org/x/tools/cmd/stress

* Rust: $cargo install cargo-stress

One thing I think both languages will need to watch out for is the P3 problem (Package Proliferation Pachyderm), where there becomes an ocean of packages, some too trivial to really be an effective package (e.g., a package with a single functor to add two numbers). I've seen this with Node.js, though the community has noticed it and is working to rectify the situation. This, like so many problems with language adoption and evolution, is a policy/community problem and not a technical problem. Both Go and Rust have great communities, so hopefully we can avoid P3 in the future.


I have zero field experience regarding go, but it seems to me that it's made to remove friction for teams.

Lean toolchain, lean build times, lean formatting (can you imagine the amount of time wasted on IDE config, commit syntax, and formatting-style debates?) so that large groups can just go to work.

ps: I'd love to work in rust. As you said, it seems very potent at making very expressive yet very efficient code.


I was trying to do some CLI formatting this weekend and discovered that an equivalent to leftpad exists in the Node API.

I was up to my elbows in troubleshooting (when you don't know how to solve the problem, describe it more clearly), so I didn't have time to dig into the history, but I got a chuckle out of that.


> Go is a better Java / C#, while Rust is not.

Mm, not really. Go is a less sophisticated Java / C#, in the same way that a bicycle is a less sophisticated motorcycle. There are situations for both, and there's nothing wrong with either, but sometimes you want or need to travel hundreds of kilometers in a day, and a bicycle isn't going to cut it for most people.


I hate bad analogies that have us arguing more about how well the analogy fits than simply debating the underlying question. When we talk about concurrency or the ability to make simple yet high-performing server apps, it's ridiculous to say Go is a bicycle compared to the motorcycle that is Java / C#.

User-space network driver? The benchmarks group Go closer to Rust with Java / C# way behind: https://github.com/ixy-languages/ixy-languages

This would be a very complex comparison and simple analogies do us a disservice. I think the author's blanket statement that Go is faster than Java / C# also seems too broad.


> User-space network driver? The benchmarks group Go closer to Rust with Java / C# way behind

That's a very generous reading of those benchmarks. What you said is true in the latency benchmarks, but in the throughput benchmarks, C# _beats_ Go at high packet rates and is much closer to Go than Go is to Rust at low to medium packet rates.


How do you get that C# is way behind Go on this one? If anything, those benchmarks show these groups of languages performing at roughly the same level:

1. C/Rust

2. Go/C#

3. Java/OCaml/Haskell

4. JavaScript/Swift

5. Python


I agree that my wording was too strong due to my memory of the latency results. As you increase the load, Go is closer to Rust/C than C# is (see the last benchmarks), and at a given load, C# isn't in the picture. It's fair to say that Go/C# are similar while Java is far behind, depending on the benchmarks that are important to you.


> and at a given load, C# isn’t in picture

And depending on how you draw the picture, Go might not even be in the bandwidth picture.


The analogy is better than you intended. It's a lot easier to kill or injure yourself with a motorcycle than a bicycle, too.

The Java/C# vs Go equivalent being causing your project to run overbudget and a be a trainwreck of bugs due to overengineering and runaway complexity.


golang has nothing to offer compared to Java's introspection capability, performance, tooling, tunability, and even concurrency (see java.util.concurrent). Not to mention maturity and widespread adoption.

It's just an overhyped, subpar language.


Concise, clean, effective and without cruft. That's the main reason for Golang's unstoppable growth.

Java is a language and computing platform from the 90s and it shows. It really needs to just stop and the mess around licensing only serves to help kill it.

I also wouldn't tout the concurrency of Java over Golang - having the concurrency model baked into the language and runtime means a clear advantage for Go and one which simplifies the communication among different agents.

Please learn some basics.


I have used both in production, and Java/JVM is far superior.

Concise is not a word I'd use to describe golang. I can't count how many times I've come across code that would be 1 or 2 lines in Java compared to 15 lines or more in golang. It's quite ironic given the unsubstantiated claims around golang. Lots of cruft, with "if err != nil" littered everywhere, is barely scratching the surface.

Java is getting a green thread implementation by means of project Loom. However, the JVM already handles very high concurrency systems in production by using libraries like Akka, Vertx, and Reactor. It is already used by major corporations to handle extremely high load systems. It's already proven itself over decades.

I don't know what you mean by Java being a computing platform from the 90s. If anything, golang is at a similar level as Java when it was first released (no generics), except worse (error prone error handling, poor design of interfaces, and many bad and unsubstantiated design decisions). Not to mention the JVM having state of the art GC, as well as performing optimizations way beyond what golang is capable of.

OpenJDK is fully open source, no licensing there. Not only that, but Oracle has open sourced previously closed source projects pertaining to the JVM and tooling around it.

The main reason for golang is the hype behind it. Proof is that some of the same authors worked on its predecessor a long time ago, and nothing ever came out of it because they didn't have the Google brand behind them. People today follow hype without substantiation.


You're arguing a lot of abstract points here but let's have a walkthrough...

> I have used both in production

Likewise, and I have found the opposite for Java/JVM-based applications.

Terrible deployment story, terrible amount of tweaking and JVM "hacks".

> Java is getting a green thread implementation by means of project Loom

Well done to Java. It's getting features already commonplace in Golang. Bolting on Akka, Vertx... yeah, enjoy your Frankenstein's monster lol.

> no generics

Generics _aren't_ critical for a language, exceptions are a mess, and as for the JVM being state of the art... yeah. In the 90s it was.

> The main reason for golang is the hype behind it

I just fundamentally disagree - if that was truly the case, I'd have adopted Java back when it was hyped and relevant and would've moved onto Rust by now from Golang which never happened.


> Terrible deployment story, terrible amount of tweaking and JVM "hacks".

A lot of people these days are building fat/uber jars. All it takes to run your code is `java -jar foo.jar`. Can't get much simpler than that.

Tweaking is a plus that golang doesn't offer. The JVM runs an extremely wide range of workloads, and gives you the ability to tune it accordingly, e.g. whether you care more about throughput vs latency. Compared to golang where this is extremely limited.

It's not surprising that a huge number of data processing workloads run on the JVM (regardless of implementation language).

> Bolting on Akka, Vertx... yeah, enjoy your frankensteins monster lol.

That's not an argument. These frameworks are mature and battle tested, not to mention built on sound principles (e.g. Akka is similar to Erlang's actor model, which includes supervisor capability and remoting - nothing like this exists in golang).

> Generics _arent_ critical for a language, exceptions are a mess

They're not critical in the same way "functions/procedures" are not critical, but you're going to end up with a messy code base when it comes to reality. Again, the number of times I've seen what amounts to map/filter calls littering the code base, making it more difficult to read (not to mention error prone) is too many to count. Generics fix this. Funnily enough, it's golang that chose to disregard best practices from the 70s.

> and as for the JVM being state of the art... yeah. In the 90s it was.

Tell that to Google, FB, Apple, Amazon, and many more who run their critical infrastructure on the JVM. The introspection and monitoring it provides is literally second to none, not to mention performance, tunability, hot-swapping, rich ecosystem, and many more.

> I just fundamentally disagree

You can, but the fact remains that some of the same authors worked on a golang predecessor which never went anywhere, precisely because it didn't have the Google name behind it.


but but... anybody can ride a bike and you don’t need to pay for gas. and it’s better for the environment. think about the new people joining the team.

