The Value in Go’s Simplicity (benjamincongdon.me)
368 points by Cthulhu_ 71 days ago | 360 comments



One benefit of radical go-style simplicity that I haven't seen discussed much is that it forces you to focus on the task at hand. Like, you know that your code will look like shit anyway so it is pointless to fuss too much over it. Whereas many programmers using a more "clever" language like haskell will spend a lot of time trying to make code more readable, searching for the right abstraction, arguing whether something is a monad or some other gizmo. Most of it is wasted intellectual effort.

Everything in moderation of course. For me personally the simplicity of Go is too much and I don't feel comfortable writing it.


I agree that Go forces you to focus on the mechanics of your code more so than some "fancier" languages.

However, Go's poor type system also forces you to write worse and less safe code than you want.

For example, the lack of enums and sum types means it's hard to represent, safely and efficiently, anything that should be the same type but have different values within a strict set of allowed values. The nearest you get in Go to an enum is:

  type Color int
  const (
    Red Color = iota
    Blue
    Green
  )
Alternatively, you can do the same with a string:

  type Color string
  const Red Color = "red"
  // etc.
This gives you some type safety, since a Color requires casting to force an int or string or whatever to become it. But that doesn't give you real safety, since you can do:

  var c Color = Color("fnord")
This comes up whenever you're handling foreign input. JSON unmarshaling, for example. You can override the unmarshaling to add some validation, but it won't solve your other cases; it will always be possible to accidentally accept something bad. Not to mention that the "zero value" of such a type is usually wrong:

  type Color string
  var c Color // Empty string, not valid!
A safer way is to hide the value:

  type Color interface {
    isColor()
  }
  type colorValue string
  func (colorValue) isColor() {}
  var Red Color = colorValue("red") // etc. 
Now nobody can create an invalid value, and the zero value is nil, not an invalid enum value. But this is complicated boilerplate that shouldn't be necessary in a modern language in 2019.

The case of genuine sum types is worse. Your best bet is to use sealed interfaces:

  type Action interface {
    isAction()
  }

  type TurnLeft struct{}
  func (TurnLeft) isAction() {}

  type MoveForward struct{
    Steps int
  }
  func (MoveForward) isAction() {}
There are some downsides. You have to use static analysis tools (go-sumtype is good) to make sure your type switches are exhaustive. You get a performance penalty from having to wrap all values in an interface. And if you're going to serialize or deserialize this (e.g. JSON), you will be writing a whole bunch of logic to read and write such "polymorphic" values.
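To make the exhaustiveness problem concrete, here is a sketch of a type switch over such a sealed interface (the describe function is illustrative, not from any particular codebase). Note that nothing in the compiler forces the switch to cover every variant; only the default branch catches a missing case, and only at runtime, which is why tooling like go-sumtype is needed:

```go
package main

import "fmt"

type Action interface{ isAction() }

type TurnLeft struct{}

func (TurnLeft) isAction() {}

type MoveForward struct{ Steps int }

func (MoveForward) isAction() {}

// describe dispatches on the concrete Action type. If a new Action is
// added, the compiler will not complain about the missing case here.
func describe(a Action) string {
	switch v := a.(type) {
	case TurnLeft:
		return "turn left"
	case MoveForward:
		return fmt.Sprintf("move %d steps", v.Steps)
	default:
		return "unhandled action"
	}
}

func main() {
	fmt.Println(describe(TurnLeft{}))
	fmt.Println(describe(MoveForward{Steps: 3}))
}
```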


Agree with this. There are a couple of observations I'd make, though.

Firstly, "enums" using iota should always be defined as either

    const (
        Red Colour = iota + 1
        Blue
        Green
    )
or

    const (
        Unknown Colour = iota
        Red
        Green
        Blue
    )
to avoid the zero value problem.
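Either layout pairs well with a small validity method. The Valid helper below is hypothetical, but it shows how the iota + 1 layout makes both the zero value and out-of-range casts detectable:

```go
package main

import "fmt"

type Colour int

const (
	Red Colour = iota + 1 // starts at 1, so the zero value is invalid
	Blue
	Green
)

// Valid reports whether c is one of the declared constants; both the
// zero value and casts like Colour(42) fail the check.
func (c Colour) Valid() bool {
	return c >= Red && c <= Green
}

func main() {
	var zero Colour
	fmt.Println(zero.Valid(), Red.Valid(), Colour(42).Valid()) // false true false
}
```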

Secondly, and this is a personal preference, I've really enjoyed not having to work with sum types. In practice other programmers seem to use them when a public interface would have been sufficient, and it's convenient to be able to do:

    // original package 
    type Position struct {
        X, Y int
        Rot Rotation
    } 

    type Action interface {
        Apply(*Position)
    }

    type MoveForward struct {
        Steps int
    }
    func (m MoveForward) Apply(p *Position) {
        switch p.Rot {
        case Up:
            p.Y += m.Steps
            // ...
        }
    }

    // second package wrapping the first
    type WarpToPoint struct {
        X, Y int
    }
    func (w WarpToPoint) Apply(p *movement.Position) {
        p.X, p.Y = w.X, w.Y
    }


> I've really enjoyed not having to work with sum types

Your example of what you prefer uses (a shitty approximation of) a sum type in p.Rot.

(This is also the most basic possible use of a sum type; they are not only useful for enums, it's just to point out that even a large amount of "simple" Go code would benefit from them.)


I understand that p.Rot is a shitty approximation of a sum type, but it still works and almost certainly won't break anything since Go forces you to explicitly typecast. The important thing is that the list of possible actions wasn't sealed by the type system, which in the original example it was.

I want to reiterate that I am aware sum types can be useful. I just don't think they're useful _enough_ to outweigh being a footgun for calling code.


I would argue that this misses the use case of sum types, which typically don't have behaviour (or they'd just be interfaces!).

For example, consider an AST for a programming language. AST nodes don't have any behaviour (though they might have some methods for convenience, though nothing that implements behaviour). You want to do optimizations and printing and compilation and so on, but on top of the pure AST.


Enums exist specifically to be compared with one of their possible values, how can you have a zero value problem?


If the caller of your interface does not specify a value for your enum, they have implicitly specified the zero value. Whether that’s desirable behavior or not is up to you. For most clients, this behavior can be surprising if the zero value is a meaningful one (i.e. one that implies an intentional choice).

IME it’s useful to explicitly define the zero value’s meaning as “UNSPECIFIED”, to simplify the problem of trying to guess if the client intended to pass a zero value.
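In Go terms, that convention is just a matter of reserving the zero value for "unspecified" (the names here are illustrative):

```go
package main

import "fmt"

type Status int

const (
	StatusUnspecified Status = iota // zero value: the caller never chose
	StatusActive
	StatusInactive
)

type Request struct {
	Status Status
}

func main() {
	// A caller who never sets the field implicitly sends the zero value,
	// which now reads unambiguously as "unspecified" rather than as a
	// meaningful choice.
	var r Request
	fmt.Println(r.Status == StatusUnspecified) // true
}
```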


> Enums exist specifically to be compared with one of their possible values, how can you have a zero value problem?

Because there's not actually support for enums in go.

There's support for constant values, and automatically assigning sequential values to them. That happens to be useful for solving the same kinds of problems that enums solve, but they're not equivalent.


It probably amounts to the same thing, but I think there's a more pragmatic approach to thinking about safety. A type is a way to remember that validation has already been done. This is true for constants by inspection (code review). For dynamic code, have a validator function that takes unvalidated input and returns a Color or an error, and always use that to create a Color.

That's usually sufficient. Any "cheating" should come up in code review as a suspicious cast to Color. In an audit, you could search for casts to Color.

Safe languages often have unsafe constructs. It's the same principle. The unsafe code is signposted, and you review it.

If you want further encapsulation, another useful trick is to make Color a struct with a private field. It's not usually necessary, though.

Go does have an unfortunate quirk that you can always create a zero value without calling a constructor, so you'll need to make sure a zero Color has meaning. (An interface doesn't really change this, because then the zero value is nil. That's not an improvement over making the zero value mean "black" or "transparent" or "invalid".)
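Both tricks together might look like the sketch below (ParseColor and the field names are illustrative). The private field keeps arbitrary values out, and the validator function is the single gateway from untrusted input:

```go
package main

import (
	"errors"
	"fmt"
)

// Color wraps its value in a private field, so code outside the package
// can only obtain one via the exported constants or ParseColor. The zero
// value Color{} is still constructible, so it has to mean something:
// here it simply matches no declared constant.
type Color struct {
	name string
}

var (
	Red  = Color{"red"}
	Blue = Color{"blue"}
)

// ParseColor validates untrusted input once; every Color in the program
// then serves as proof that validation happened. Any other construction
// would stand out in code review.
func ParseColor(s string) (Color, error) {
	switch s {
	case "red":
		return Red, nil
	case "blue":
		return Blue, nil
	default:
		return Color{}, errors.New("invalid color: " + s)
	}
}

func main() {
	c, err := ParseColor("red")
	fmt.Println(c == Red, err)
	if _, err := ParseColor("fnord"); err != nil {
		fmt.Println("rejected:", err)
	}
}
```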


I know it’s not a proper part of the language, so it might qualify as a “hack” solution, but one can get around some of the issues you describe by defining your enums in protobufs, and using/trusting the generated code, no?

It’s not a pretty solution from a language design point of view, but it’s been more than effective for us from an engineering point of view: since we’d need those proto definitions anyway, why bother writing our own?


Metaprogramming fixes everything. Why didn't they put in sum types in the first place? I think the designers probably didn't know about it at the time.


Given the lineage of the designers, they were pretty much aware of it.


The "linage" implies that they aren't aware of it, or at least they weren't when designing the language. Rob Pike is more of a systems guy than a type theorist.

Like most systems guys, they know of how C handles types and how C++ handles types. The concept of Parametric Polymorphism or sum types outside of OOP is most likely something Rob Pike was not familiar with as it's hard to see why sum types were not included in the language.

Having IO functions return a tuple of error and value rather than an Option type is not simplicity; it's complexity arising from the lack of a type primitive. The feature is ugly and very much looks like it was implemented by someone who wasn't aware of a type that can be a Value OR an Error. So instead he implemented a type that can be an Error AND a Value and left it to manual runtime checks for the programmer to figure out whether an error even occurred.

The other thing is that this "tuple" of error AND value looks like a hack. Tuple types can't be saved to a variable in Go; they can only be returned by functions and immediately unrolled into their separate primitive values. It's like Rob knew something was off in this area, so he created a temporary concept of a tuple in the return value of a function to make up for it. A consistently designed language wouldn't have a tuple only returnable by a function call. It seems strange and inconsistent.

Additionally, the fact that, in Go, some types can have nils and other types default to a zero value implies that Rob knew nulls were bad but didn't know how to completely get rid of the null.

I'm thinking that Rob's initial notion of sum types and parametric polymorphism was that they can only be implemented via hierarchies of inheritance, which itself has many problems. It makes sense because this is what systems programmers are exposed to (C, C++) as opposed to typed lambda calculus or Haskell. So it's easy to see that Go is the result of Rob's awareness of problems with OOP but lack of awareness of the theoretical alternative.


He was at the premiere computer science research organization for decades. And you presume that he's not familiar with sum types, because if he was, he would have included it, and therefore he can't have been familiar with it, because he didn't include it? That's the most amazingly arrogant thing I've heard in a while.

The fact is that Rob almost certainly knows more than you, rather than less. And he still made the choices he made. That should make you ask questions, not about Rob's knowledge, but about yours.


https://news.ycombinator.com/item?id=6821389

Read that quote by Rob and the responses. In the quote Rob describes what he believes generic types are at face value... he literally takes it into a tirade about inheritance and hierarchies of classes... something that is not part of type theory at all.

Honestly, it feels like he didn't know about Algebraic Data types. I'm not the only one who thinks this as shown from the responses to his quotation.

One of the responses:

"Or perhaps Rob Pike just hasn't explored the relevant literature in enough depth. At one point he admitted that he didn't know that structural typing had already been invented previously! This isn't to criticize Rob, I find his talks fascinating, I think he's awesome, he's a friend of my boss, etc. But he's hardly the first hard-core hacker to be ignorant of the degree to which type theory has seen dramatic advances since the 1980s."


Honestly this is really compelling evidence that Pike doesn't know much about type theory. That isn't terribly surprising, and the other early collaborators on the language that I know of also came from more of a systems background. I think it's entirely likely that Go's crippled type system is partly an accident, and not entirely a design choice. It would be helpful if, with the benefit of hindsight, they would admit it, rather than invent post-hoc justifications for the way things are.


I don't know man. I feel like there are places I know more than Rob Pike almost certainly. Like, I don't know, most of functional programming. I seriously doubt he knows what indexed monads are better than me.

So, at the point they were creating Go, I think it's perfectly reasonable they had even less exposure to fp, and didn't actually know about these better solutions.


This feels like the opposite of an ad hominem fallacy


Well... those Bell Labs types were polyglots. They tried a lot of things in a lot of languages. Does that mean that Rob Pike knew about sum types? Not necessarily, no. But it gives you two possibilities.

1. Rob Pike spent all that time at Bell Labs, with all these CS experts, read and wrote all those papers, and never heard about sum types. That's... possible. It's not the way I would bet, but it's possible.

2. Rob Pike knew perfectly well what sum types were, and left them out of Go, because he thought they didn't fit with what he was trying to do.

To me, the second is both more charitable, and more in line with what I think Rob Pike's background and experience would have exposed him to. crimsonalucard obviously disagrees. He seems to think that sum types are so obviously the right thing that Pike could not have possibly not put them in Go had he known about them, and therefore he could not have known. And that is in fact possible.

But it seems to me to better fit with Pike's background, as well as with the principle of charity, to assume that he knew. And still he chose to leave them out.

Now, he could still be wrong. And we can discuss whether sum types are really a good fit for what Go is trying to do. But the assumption that he couldn't have known, or he would have done it the way someone else thinks he should have, is what grates on me.

For what it's worth, the Go FAQ (at https://golang.org/doc/faq#variant_types) says that they considered sum types, and didn't think they fit.


> But the assumption that he couldn't have known

Please read my initial assumption. In no place did I say he COULDN'T have known. Read it. I literally started the statement with "I think the designers probably didn't know" rather than "I know they COULDN'T have known." There is nothing to "grate" you here. I simply had an opinion and a guess, and you disagreed with it and decided to insult me.

What grates me is the assumption that I said it's 100% true that Rob Pike didn't know what a sum type was. I think it's very likely he didn't know. If he did know then I am wrong. That's all.

>For what it's worth, the Go FAQ (at https://golang.org/doc/faq#variant_types) says that they considered sum types, and didn't think they fit.

That FAQ should have been presented earlier in a cordial and civil way. If you had, I would have admitted that my hypothesis was incorrect. Science, logic, and evidence rule the day, and I try not to invest any emotion into any of my opinions. It's hard, but I follow this rule. If the FAQ says he knows about it, then he does and I am wrong. Instead you chose not to present this evidence and to call me arrogant.

There was no need to call me "arrogant." It disgusts me to hear people talk like this. Either way, Go the language feels awkward in the way it uses product types, and it does indeed feel like Rob didn't know about sum types, because they certainly feel more fitting than having a function return a tuple out of nowhere.

I also disagree with the FAQ. Plenty of languages have constraint types that are placed on the subtypes of the sum type. There's no confusion imo. Also note that the previous sentence was just an opinion. Please don't call me arrogant because I have one.


Well, I didn't have the FAQ earlier. I was guessing then.

And I don't see the FAQ as necessarily total vindication of my position. The language team considered sum types; it doesn't mean that Rob Pike did in the initial design. It could be that, after it was kind of mostly formed, they thought about sum types and couldn't find a sensible way to make them fit. Or it could mean that he considered them and rejected them from the beginning. The FAQ isn't specific enough to say.

As for calling you arrogant: You are not the first person who has said, here on HN, that Pike "looked like he didn't know"/"must not have known"/"couldn't have known". Those conversations kind of run together in my mind. As a result, I was hard on you at least in part because others went too far. That's not fair to you, and I apologize.

I also cannot call you arrogant for having an opinion. I also have one - you may have noticed this. ;-)

However, I feel that I should say (and say as gently as I can) that you often sound very harsh on HN. A harsh tone causes many to read your content with less charity than the ideas might deserve. (I am not here trying to defend my interaction in this thread.) And this is not very helpful of me, because if you ask for advice on what, specifically, to change, I'm not sure I can give any. I mention it because you may be unaware of it, and awareness may help.

I can easily see how the previous paragraph could offend you. I am not trying to do so. Forgive me if it causes offense.


> To me, the second is both more charitable, and more in line with what I think Rob Pike's background and experience would have exposed him to.

More charitable to Rob Pike, rather than to the person you're in the middle of a conversation with.

Anyway "appeal to authority" is not an argument, it's a religion. Our lord and savior, Rob, knows so much that his design decisions are beyond question.


Not only is it likely that, as you say, Rob Pike was aware of sum types, but he also did not create Go on his own; it was created by a small team. Someone like Robert Griesemer, who studied under Wirth, would have known about them, if the others hadn't.

I have been using the Wirth languages a lot (Pascal, Modula), and one of the big appeals of Go to me is that it brings back a lot from those languages to modern times. The Wirth languages are far too underrated in programming today.


The irony being that most Wirth languages are more expressive than Go will ever be, with the exception of the first release of Pascal and Oberon versions, and the follow up on minimalist design approach with Oberon-07.

When Go came out, I thought it could follow Oberon, starting small and eventually reaching Active Oberon/Zonnon expressiveness, but alas that is not how they see it.

Even Limbo has features that Go still lacks.


It’s called “appeal to authority”: https://en.wikipedia.org/wiki/Argument_from_authority


The name of this fallacy is appeal to authority.


Having the value returned together with the error is convenient for a couple of reasons.

First, it's often possible for the function to return a meaningful value even in an error case (e.g., number of bytes read before the error occurred).

Second, it's often possible to return a sensible 'null' value together with an error which can be handled correctly without checking the error value. (A map lookup is the obvious example of this.) This simplifies logic in some places.

Using sum types for errors in Go wouldn't actually work very well unless you fundamentally changed other aspects of the language. You'd need pattern matching, a whole bunch of generic higher order functions for manipulating option/result types, etc. etc.


>First, it's often possible for the function to return a meaningful value even in an error case

Create a type that explicitly stores this information. The return type can hold (an error message and a value) OR (just a value). This type expression is a more accurate description of what's really going on. However you represent the product type, it's not isomorphic to the actual intended result the function should return. A sum type can represent the return value of the sentence below, while Go cannot:

"A function that returns (a value) OR an (error with a value)"

This is the true intention of the function you described.

>Second, it's often possible to return a sensible 'null' value together with an error which can be handled correctly without checking the error value. (A map lookup is the obvious example of this.) This simplifies logic in some places.

But it opens up the possibility of a runtime error if you fail to check for it. Historically there are tons of functions in C, C++, or JavaScript that use null to represent an error, and the inventor of null has famously called it his "billion-dollar mistake". No language needs a null.

>Using sum types for errors in Go wouldn't actually work very well unless you fundamentally changed other aspects of the language. You'd need pattern matching....

Using product types to represent errors has already changed the nature of Go in a very hacky way.

Only functions in Go can return tuples, and the concept of the tuple can never be used anywhere else. You cannot save a tuple to a variable, and you cannot pass a tuple as an argument. You can only return a tuple and then instantly unroll it. It's an arbitrary, hacky feature obviously made to support error values.

It would be better to have a general pattern matching feature... that makes more sense than arbitrary tuples returned from functions.

>a whole bunch of generic higher order functions for manipulating option/result types, etc. etc.

Actually, no, you don't. Go functions return tuples with errors; does this mean that higher order functions need to handle tuples? No! Not at all. In fact Go explicitly eliminates support for this... the tuples in Go need to be unrolled into their constituent types before they can be used in any other function. The same concept can be applied to Option types. You have to unroll the value and explicitly handle each individual type. You do not ever need a higher order function that accepts the Option type as a parameter.

In all languages that have the Option type/Maybe monad, etc., any function that returns this type is impure and needs to be unrolled first before passing the value down to the functions that do closed, pure calculations. A function that takes an Option type as a parameter is a function that says "I can only take values from IO." It's very rare for functions to be implemented like this even in languages that have first-class support for sum types and monads. In Haskell I can't recall ever seeing a function that takes the IO monad as a parameter. In Haskell and in Rust these values need to be unrolled into their constituent types before they can be used.

Please note I am not advocating the inclusion of monads into Go. Just talking about sum types.


To me, Go is a "masturbation prevention" language. Meaning that certain deliberate design choices were made to prevent precisely the types of endless unproductive masturbation you see in some other languages, i.e. type masturbation in Haskell, OOP/IoC masturbation in Java (particularly egregious; Java is not a bad language otherwise), or metaprogramming masturbation in C++. The omission of these features is not a bug. It's a feature in itself.


Indeed. More than once, I have seen projects ruined by creating an overly elaborate class hierarchy, sometimes 10 layers deep, just to express every theoretical aspect of the domain in the structure of the class hierarchy. Java programs often suffer from this. Which is especially sad, as Java has interfaces, which I think are the right way of representing abstract types for APIs, for example. Interfaces don't force you into a type hierarchy just to fulfil a contract. But unfortunately, they are way too rarely used.


If anything in Java they're overused. You often see only one class implementing an interface where the programmer can reasonably expect there will never be another implementation, and where it's not exposed outside the API boundary, so the interface is gratuitous.


For json unmarshalling of structs at least, I've become a pretty big fan of the validator library from the go playground:

github.com/go-playground/validator/v10

It uses struct tags to validate the struct and is quite extensive.

https://godoc.org/gopkg.in/go-playground/validator.v10


  type Color struct {
    R, G, B, A byte // IDK
  }
  func (o OtherType) Color() *Color {
    return &Color{R: o.r, B: o.b, G: o.g}
  }
  type Colorer interface {
    Color() *Color
  }
A Colorer would return a Color regardless of its underlying type. This is a behavior-based interface: I just need a thing such that when I call Color(), I get a *Color.


That's not an enum or sum type, though, and misses the point of my example. For colors, sure, you can use a structural type to represent RGBA, but that wasn't what I was trying to get across. What if the set of possible values cannot be described as scalars? The other example with "actions" demonstrated this problem.


I want to see generics and real enums (I.e., Rust enums, not C/Java enums) added to Go, but as far as it being unsafe or inefficient, these concerns are overblown for many apps. People who levy this criticism are often fine with Python and/or JS for similar categories of applications, even though they are far less safe and less performant than Go. We should be clear when we criticize Go that we’re talking about addressing the last 1% or so of type-related bugs and/or extending the performance ceiling a bit higher. We should also give Go credit for permitting a high degree of safety, performance, and productivity when other languages make you choose one.


Java enums are much more powerful than plain old C enums actually.


I’m aware, but it’s irrelevant.


I find the opposite is true. It too often means the focus is on fiddly and tedious book-keeping - instead of writing some code that says “please do a thing”, I write some code that says “please perform these 20 steps to do a thing in excruciating detail even though you are much better at deciding how to do this than I am”. It’s noise that detracts from the readability of the code far too often for my taste.


I agree with this 100%. Go is great for micro-readability: "what do these ten lines of code do." Go is horrible for macro-readability: what does this module do, what does this service call do. If you compare a fixed number of lines of code, I wouldn't be surprised if Go always wins out for readability. But if someone says, "Figure out the business logic behind this feature implemented in Go," get ready to spend a lot of time scrolling through low-level code.

I always thought code written as page after page of low-level details was bad code. I thought the same thing about code written as class after class after class of OO hierarchy. But people talk about Java and Go as if it's impossible to write unreadable code in them. I don't think code has to contain a single hard-to-understand statement to be "unreadable." After all, code that is unreadable because of abuse of powerful language constructs isn't literally "unreadable." You call it that because reading it requires an unreasonable amount of time and effort. The same thing can (and should) be said about code that requires an unreasonable amount of effort for any reason.

To me, it's just different ways that programmers can waste your time. One programmer might waste your time by combining powerful language features in cryptic ways; another might waste your time by hiding crucial structure in a vast sea of details. What's the difference?


>I always thought code written as page after page of low-level details was bad code.

I completely agree with this. The best code is code you don't have to read because the structure of the code makes navigation easy and functional boundaries obvious. A language that doesn't provide strong support for declaring functional boundaries results in code that is much harder to read because you have to comprehend a lot more of it to know what's going on.


Oh my, I didn't even think of it, but you are right.

A few months ago I started working on a Go codebase. Yes, while the language is simple, you can absolutely make the code confusing. If you wrote the code yourself, it is obviously simple to you, because you know the structure. But it can be a nightmare to someone else who needs to learn the structure and only has the code.


> But if someone says, "Figure out the business logic behind this feature implemented in Go," get ready to spend a lot of time scrolling through low-level code.

I've found "go doc" amazing for this use-case; I only ever trawl through the source-code for a high-level understanding as a last resort - usually because the code is undocumented or under-documented.


Bad Go code can easily have this property. But good, elegant, well-structured Go code absolutely does not.


This has absolutely been my experience as well. Golang is good at being fast, but to say that it helps write better code because of its missing batteries/features is just silly I think.

"I want to interact with a REST API and pull a field out of its response JSON" is an incredibly common workflow, and yet to do that in golang is far from trivial. You need to define serializer types and all sorts of stuff (or you can take a route I've seen encouraged where people to use empty interfaces, which can cause runtime exceptions).

Same deal with a worker pool. Concurrency is great, but instead of providing a robust, well-written solution as part of the language itself, it gives you a toy like this https://gobyexample.com/worker-pools (still the most common result on Google) that is only 80% of the way there. Then you find yourself bolting things onto it to cover your features (we need to know if things fail, so let's just add another channel. We also need finality; welp, another channel it is), and before you know it you have an incomprehensible mess.


> "I want to interact with a REST API and pull a field out of its response JSON" is an incredibly common workflow, and yet to do that in golang is far from trivial

    // Interact with a REST API and pull a field out of its response JSON.
    func interact(url string) (field string, err error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", fmt.Errorf("error making HTTP request: %w", err)
        }
        defer resp.Body.Close()

        if resp.StatusCode != http.StatusOK {
            return "", fmt.Errorf("error querying API: %d %s", resp.StatusCode, resp.Status)
        }

        var response struct {
            Field string `json:"the_field"`
        }

        if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
            return "", fmt.Errorf("error parsing API response: %w", err)
        }

        return response.Field, nil
    }
IMO this is a good level of abstraction for a language's standard library. Each of the concrete steps required for the process is simply and adequately represented. Each can fail, and that failure is idiomatically managed directly and inline, which, when applied to an entire program, significantly improves reliability. If you find yourself doing this often, you can easily write a function to do the grunt work. Probably

    func getJSON(url string, response interface{}) error
> Same deal with a worker pool. Concurrency is great, but instead of providing a robust, well written solution as part of the language itself, it gives you a toy like this

Go's concurrency primitives are very low level. This can be bad, but it can also be good: not all worker pools, for example, need the same set of features.


Some of us need the intellectual delight to make the work bearable. If I can view my code as some kind of art, constructing great abstractions, it helps me forget that I’m spending a huge portion of my life logging, aggregating, and analyzing internet clicks.


The problem is you’re delighted by the code but everyone who comes after you can’t stand it.


You don't know that the word "great" in this case doesn't mean "simple, elegant, and as minimally abstracted as necessary"


It might not have been the intention of the original poster, but "great" to me implies pretty much the opposite of minimal abstractions. But perhaps I am just burnt by experience :)


Consider trying to get a job in a part of the tech sector doing meaningful work. Just because the best minds of our generation are squandering themselves on adtech doesn't mean you have to do the same.


You should view this condition as something to fix in yourself. Take joy in doing these things with precision and efficiency, and making your work easy to understand and explain to others. It's hubris and, frankly, rude to subject your professional colleagues to your artistic expression.


"making your work easy to understand and explain to others"

This is the hallmark of a professional (very likely mature and senior) team member. Nothing to prove and interested in a maintainable project.

There are times for clever, no doubt. Every rule has an exception.

Bill (William) Kennedy at Ardan Labs has a line he uses in his talks: the bottom level developers need to grow and come up and the top level developers need to avoid cleverness and come down and everyone meet in the middle.


Look bud, not all of us want to take delight in being an easily replaceable automata. It ain't a personality flaw.


So you’d take joy in making your coworkers’ lives hell instead?

I understand the kind of joy-of-expression that lives in things like https://poignant.guide/dwemthy/, but if you attempt to put that in a production codebase, I’m not gonna be at-all positive in code review.


Great, I'll let every reviewer of my code know that they've been doing it wrong.


Are you saying that you actually write the kind of code that I linked? Because my expectation was that you’d look at it and say “well, my code certainly doesn’t look like that. It’s quite reasonable in comparison, in fact.” I don’t think I’ve ever seen anyone write code like Dwemthy’s Array in a project that has to “do something productive”, even if they’re the only one working on it.


Maybe the sort of code you linked isn't the sort of code we're talking about? Nobody is running around bigco writing demoscene stuff or whatever.


Doing the things I said well makes you an irreplaceable member of a team, worth your weight in gold.

Save art for the canvas, for your weekends, for your loved ones. Bring a professional self to your job.


Not sure why you seem to think code written with feeling behind it is some unmaintainable mess. That's been the opposite of my experience.

The way people fail in this line of work, if they have skill, is burnout. Burnout is the thing that'll get you. So you do whatever you can to stave it off - and that requires working on something you actually care about in some way.


We aren't talking about "code with feeling behind it" but rather "code as some kind of art".


You might have a different idea of what art looks like. Well designed abstractions are elegant, conceptually simple, and not-leaky. These tend to make code more maintainable and easier to comprehend.


One of the things you learn after writing code long enough, is that there is no such thing as a perfect abstraction, or even a non-leaky one. Eventually you run into edge cases, either in performance or functionality, that causes you to add warts to your abstraction.


Stipulating that no abstractions are perfect shouldn't be an excuse to abandon the entire notion. There's still a gradient of more or less elegant and flexible abstractions.


This definition of art, though not wrong, is so expansive as to be meaningless, especially in the context of this discussion.


> Take joy in doing these things with precision and efficiency, and making your work easy to understand and explain to others.

If you think this is different than what the parent is describing then you’re doing it wrong


"Code as art" implies a strictly different set of criteria than the ones I listed. If the Venn diagrams overlap a lot for you, that's great, but it's rare.


I think you have a very specific, and not widely shared, definition of “code as art.” Code as art does not mean code full of pointless Rube Goldberg mechanisms or following some esoteric golden ratio whatever. For me, “code as art” means code which is well-abstracted, readable, correct, concise, maintainable, extensible, well-documented, performant, etc — I.e. reflecting the things that matter to me as a developer. The process of getting to the point where the code has all of those things, or as many as possible, is indeed the “art” of coding. To assume that the result is some horrible morass of spaghetti that no coworker wants to read is a strange one for sure.


The thing I detest about discussions of code aesthetics is the idea that the quality metrics you speak of have such a direct relationship to the "product features" of the language, that we can simply know it's good by looking at it, and that we are hapless simpletons unable to write this so-called "beautiful" or "clean" code if we do not have the feature available. That is all bullshit. Most of the features are shiny baubles for raccoons and magpies; I do NOT know what good code looks like (I can only state whether a coding style eliminates some class of errors), and what matters most is the overall shape of the tooling.

Some languages have a big bag of tricks, other languages let you extend them to the moon, and still others make you work at it a little. In the end it's all just computation, and the tool choice can be reduced to a list of "must haves" and "cannots". If you need more expressive power -- make your build a little more complex and start generating code, give it a small notion of types or static invariants. It only has to generalize as much as your problem does, and that leads you to build the right abstraction instead of dumping an untried language feature on the problem in the hope that it is a solution.


Your definition of art is essentially synonymous with good or elegant, and therefore not really useful in this discussion.


What is your useful definition of art that is useful in this discussion?


As for writing code, I write it cleanly and with proper, clear language in-code commenting for ME. Because I need to go look back at what I've done and why often.


I used to be a Haskell type and now enjoy Go greatly for this reason.

There was a thread on the Rust reddit where someone was asking how to do something relatively simple using some elaborate combination of map/reduce/filter/continuations/who-knows-what, and someone said "just use a for loop", and the OP was enlightened.

People don't know how great the burden of trying to model their problem to fit a fancy language is until it's gone. I didn't.

I want generics and sum types, but I miss them less than I would have predicted.


This topic is more complicated than “for loops good, iterators bad.” I absolutely agree there’s a time and place for both; that’s why we included both in the language. But sometimes iterators have fewer bounds checks than for loops do, so they can be more performant than a loop. Sometimes they’re the same. Depending on what you’re looking for, the details of what you’re doing, and your literacy with various combinations, different ways of expressing the same idea can each be good. It all just depends.

(Also, rust’s for loops are implemented in terms of iterators; they’re actually the more primitive construction in a language sense; a while loop with the Iterator library API.)


Sure, and then you ask how to fold over a tree or an infinite stream, and the answer is to reimplement all the HOFs from the “fancy languages” in a type-specific way, because otherwise every user of your ADT is having to write not just a for-loop, but an entire push-down automaton.

I also write Go code without missing generics, but that’s because I’m also fluent in other languages, and tend to use those when I want something ill-suited to Go, rather than trying to force Go into that shape.


> I also write Go code without missing generics, but that’s because I’m also fluent in other languages, and tend to use those when I want something ill-suited to Go, rather than trying to force Go into that shape.

I think this should be the main takeaway for people learning Go - it's not suited for everything. Technically you could write "World of Warcraft" in pure assembly, but it doesn't make sense to - you'd be using the wrong tool for the job. My problem is I hear a lot of people advocating for golang with a one-size-fits-all, there's-nothing-better sort of mantra.

I have things I absolutely reach to golang for, but the sweet spot I've found is to re-implement a prototype I've built in some other language (like Python) when I need the speed. Trying to actually create new things in golang is tedious and I end up fighting the tooling more than most other languages (sans maybe C++ or Java).


> There was a thread on the Rust reddit where someone was asking how to do something relatively simple using some elaborate combination of map/reduce/filter/continuations/who-knows-what, and someone said "just use a for loop", and the OP was enlightened.

I think it's hard to discern between "that overly-complex functional and declarative definition is unfamiliar to me" and "that is way over-complicated and should just use a for loop".

Any chance you can track down that example so others can compare the two examples as well?


I’ve seen a similar effect in myself and others at work, but I don’t think it’s only a symptom of the language. After we switched from Java 6 to 8, a handful of devs including myself went overboard modeling problems to be solved with the streams API when it wasn’t necessary. These days it’s leveled out and use of the API is at a much more appropriate level.

I think this is a process of learning. While learning a new tool you start to model problems so you can practice, even though not necessary. Once comfortable you realize when to use the tool and when not.


“Some internet person did it wrong once [in my view]” isn’t really an indictment of the whole thing


> I used to be a Haskell type...

Pun intended? If so, nicely done.


I'm just going to leave a direct quote from Rob Pike:

""" The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. """

I read that as Rob Pike saying he wrote go for idiots google hires to write.

https://bravenewgeek.com/go-is-unapologetically-flawed-heres...


Not idiots, but young coders who don't have 10-20 years of experience, yet are required to write good code pretty quickly. So you want a language which is not only quick to pick up but also quick to learn to the point at which you are writing good programs.


Is the much touted "Make invalid states unrepresentable" consequence of garden-variety Sum Types wasted effort? Seems very good bang-for-mental-buck to me.


Joe Duffy argued, in my view persuasively, that Go missed a crucial opportunity by not requiring that users actually do something with returned error codes.

http://joeduffyblog.com/2016/02/07/the-error-model/#forgetti...

> It’s surprising to me that Go made unused imports an error, and yet missed this far more critical one. So close!

Result/Option/Maybe types force unwrapping, which makes ignored return codes auditable and allows you to manage technical debt.

This doesn't speak to Go's simplicity so much as it does to Go's conservatism. Having the Go standard library use sum types, establishing a precedent and a culture, would be no more complex in the absolute, but would have been more of a stretch for its initial target user base.


> It’s surprising to me that Go made unused imports an error, and yet missed this far more critical one. So close!

The most egregious to me has always been that unused imports are an error but variable shadowing is not.

Even in languages with dynamic side-effecting imports (like Ruby or Python) I've never seen a bug caused by an unused import. Not so for shadowing (don't get me wrong, it's a convenient feature, but if you're going to remove this sort of thing "because reasons", shadowing is a much bigger pitfall than unused imports).
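The canonical Go shadowing bug, for anyone who hasn't been bitten yet: `:=` inside a block declares a new `err` scoped to that block, so the outer `err` never sees the failure (a deliberately contrived sketch, with hypothetical names):

```go
package main

import (
	"errors"
	"fmt"
)

func mayFail() (int, error) { return 0, errors.New("boom") }

func run() error {
	var err error
	if true {
		// BUG: ':=' declares a new 'n' AND a new 'err' scoped to this
		// block, shadowing the 'err' declared above.
		n, err := mayFail()
		_ = n
		_ = err
	}
	// Always nil: the failure was assigned to the shadow and discarded.
	return err
}

func main() {
	fmt.Println(run() == nil) // prints true: the error was silently lost
}
```

The compiler accepts this without a warning; `go vet -shadow` (or the standalone shadow analyzer) is needed to catch it.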


Variable shadowing is actually a pretty clever thing that I'd like to see in other languages.

For example I often write code like this in Java:

    String ageString = request.getParameter("age");
    int ageInt = parseInt(ageString);
because I can't re-use the name `age` twice and am forced to distinguish between those names.

Now I agree with you about imports. I often want to comment out a line and run the program. Now I have to comment out the line, run the program, hit a compilation error, find the import, comment that out too, run again, and uncomment both lines later. With Java, my IDE optimizes imports and removes all unused ones when I'm committing my code. While I'm working on my code, I'm absolutely fine with any warnings. I would say even more: back when I used Eclipse, it had an awesome ability to compile even code with errors. Such code just throws an exception at runtime. But if I'm not really interested in that snippet and am working on another part, I can run it just fine. That feature is probably the only thing I miss in IDEA.


> Variable shadowing is actually a pretty clever thing that I'd like to see in other languages.

Shadowing exists in most languages, the biggest difference being the allowed scope relationships between the shadower and the shadowee: most languages allow inter-function and inter-block shadowing (if they have actual block-level scope so e.g. not Python).

Intra-block shadowing is a much rarer feature, and one which Go doesn't have.


I agree with you. And it would even be fine to allow shadowing but require an explicit declaration to allow it to happen in a particular case. e.g.

    import "foo"
    func other() {
        shadow var foo string = "bar";
    }


Or only allow shadowing for `var` and forbid it for `:=`[0]. Though forbidding it entirely would work just as well.

[0] and go is actually weirder than that — and the opposite way 'round — as `var` doesn't allow any shadowing in the same scope:

    var a, b int
    var a, c int // fails because it redeclares a in the same block
while `:=` allows same-scope shadowing as long as the overlap is not complete:

    a, b := foo()
    a, b := foo() // fails because no new variable on the left side
    a, c := foo() // succeeds
both allow arbitrary shadowing in sub-scopes.


There's no shadowing in the latter case. The second case is the same thing as

    a := 2
    a = 3
By definition you must have nested scopes to have shadowing. Within the same scope, it's only ever assignment.


Well, in Rust you could do:

    let a = 2;
    let a = 3;
I think you would say the latter shadows the former...


Indeed, because semantically there is a syntactically implicit scope for every let binding. For example, in that case, the outer a is dropped after the inner a, just as if the second a had been inside of a block. There may be multiple syntactic ways to introduce a new scope.


That last one isn’t shadowing. := reuses a variable of the same name in the same scope.


I'm honestly of the opinion that this shouldn't be allowed either. Accept that you need to name the variable 'fooErr' and just ban all shadowing


I really like this syntax. I wonder if it has ever been proposed for Rust? It should be compatible with the existing semantics and could be phased in and then made mandatory in a new "edition".


It has, but hasn’t gained much traction.

There is already a “let” to show you that a variable is being created, adding more verbosity to a feature that, in some sense, is about removing verbosity kinda misses the point, in my opinion.

That said, never say never, but if I was a betting kind of person, I’d bet against it ever being accepted.


> This doesn't speak to Go's simplicity so much as it does to Go's conservatism.

I think this really hits the nail on the head. There are benefits to a conservative approach, but it's not the same as simplicity.


The errcheck linter is very popular for checking that you looked at every error return: https://github.com/kisielk/errcheck

I use it in most of my open source projects.


An indication that something is missing.


Not necessarily, insofar as “Go where you always use the (result T, err error) return type” is a dialect of Go rather than Go itself.

You could give this dialect a name and then maybe the compiler could enforce rules on projects that declare that they’re using that dialect (like C compilers do with dialects like “c99” vs “gnu99”), but it’s not strictly necessary; you can also just create “dialect tooling” that wraps the language’s tooling and adheres to those rules (like Elixir’s compiler wraps Erlang’s compiler.)

And a CI shell-script, or git pre-commit hook, that runs a linter before/after running the compiler, is an example of just such “wrapped tooling.”


> Most of it is wasted intellectual effort.

I would challenge this. I would say _some_ of it is wasted, but most of it works towards making the code more understandable. And code being understandable to the next person to read it (both in the small "what is this block doing" sense and in the larger "what is this algorithm doing" sense) is very important.


I find other people's go code is generally much more readable than other people's code in most other languages.

Go's simplicity and heavy idiomatic culture means all the code more or less looks identical. This is great for team projects.


Some of it is wasted, some is genuinely useful and to know which is which you need experience and good judgment. My point is that it is incredibly easy to fall into the trap of pursuing maximum code beauty and abstraction and wasting a lot of time in trying to attain some shining ideal, especially if the language is conducive to it.


I can't speak for the GP, but my interpretation is that the wasted effort comes from doing what you describe when, instead, one could have chosen a simpler language and produced understandable (albeit "ugly", "not eloquent") code the first time round. Then one can turn those intellectual wheels on a more interesting problem to solve.


The assumption that simplicity in a language naturally encourages understandable programs is a mistake IMO. Language complexity generally exists because the language is trying to shoulder some of the complexity that would otherwise go into your program. For example, Brainfuck is nearly the simplest language possible — you could write a full rundown of its features on a sticky note — but programs written in it are not very readable.

Even Go did this by adding async features into the core language. This is a complication that doesn't exist in the older languages it was intended to build on, but by building that complexity into the language, they reduced the burden of using it in your code.
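As a sketch of that trade-off: fan-out/fan-in takes a keyword and a channel, with no executor or thread-pool library to configure, because the runtime shoulders that complexity (function name `fanIn` is mine, purely for illustration):

```go
package main

import "fmt"

// fanIn starts one goroutine per input and sums whatever comes back on
// the channel. 'go' and 'chan' are language-level, so the scheduling
// machinery lives in the runtime rather than in this program.
func fanIn(nums []int) int {
	results := make(chan int)
	for _, n := range nums {
		go func(n int) { results <- n * 10 }(n)
	}
	sum := 0
	for range nums {
		sum += <-results
	}
	return sum
}

func main() {
	fmt.Println(fanIn([]int{1, 2, 3})) // prints 60
}
```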


> Most of it is wasted intellectual effort.

It is highly stimulating intellectual effort though. Sometimes I sit down and spend hours just thinking about the best way to do something. It's some kind of philosophy: the abstractions we create reflect the way we understand things. To write good code, we must study the computer science and the problem domain itself.

Without this, it's just boring mechanical work. Once the project has been figured out it ceases to be interesting. Some of my projects are unfinished because I can't justify spending more time on them even though I know exactly what must be done.


As someone who thinks readability is the¹ most important quality of source code, that makes me less interested in learning Go.

¹ Yes, even over correctness.


That's really strange to me because I find go to be very readable. There are so few approaches to each of the basic programming building blocks in go that once you have read a moderate amount of idiomatic go, everything else just feels easy.


There are far more readable languages than Go. Go also encourages nested conditionals, which can become a nightmare to trace when you're under duress.


Maybe I'm missing something but Go _discourages_ nested conditionals. There are even static analysis tools in the idiomic toolkit which tell you when your conditionals can be simplified.


No it doesn't, quite the opposite actually. Go favours "exit early" strategies


I'm only responding to my parent post's arguments. I don't know much about Go myself.


This philosophy is about as far from what I value as it's possible to get. An easily understandable program that doesn't solve the problem is worth absolutely nothing. In fact, if it's code someone is depending on it's probably worse than no code at all. In some applications it could even cost people their lives.

Correctness is a basic starting point. It's the minimum viable offering.


That's the most common opinion, and the one I used to hold.

Here is my counter argument.

Readable code that has some bugs is fixable. Because you can understand what it's doing and how to change it.

Working code that is unreadable is basically dead. No one can make changes to code they don't comprehend. The only thing it's good for is running it as is, much like a compiled binary.


That old adage, the best code is the code not written. Programs have to solve a problem to be worth anything.


I'm firmly in this camp as well. A strong code quality metric is how easily understood something is - we write code in all sorts of states of minds at all hours of the day. If others can read your code and understand it, it means you can too when it's time to extend/modify it. This is also why things like Ruby's over reliance on metaprogramming bugs me - sometimes duplication is fine, and I'd much rather have some duplication than a wrong (or hard to discover) abstraction.

The argument that "all code gets sloppy so let's just have very verbose code from the get-go" is pretty insane to me.


aesthetics are relative, not absolute.


Readability to me is about clarity, not aesthetics.

How easily can people understand what this code does and how?

This is at least conceptually objectively measurable, though I don't know of any actual attempts to do so.


That's important. But why I say relative - is because the structure and language of readability is dependent on the code culture one is working within, not an outside measure.

That's why things like C coding style is so variable - sometimes within the same body of code (see net-snmp ....)

I'd rather have consistent coding standards but I daily deal with different team projects with different conventions, so I'm used to adapting my own reading conventions as I switch contexts.

Keep a common aesthetic within a project. It's worth it - by measure of success of a project.


This might be bad for maintaining a lively argument in this thread, but I fully agree with that.


Many programmers using a more "clever" language like haskell will spend a lot of time trying to make code more readable, searching for the right abstraction. Most of it is wasted intellectual effort.

I rather disagree that making your code readable and maintainable is a waste of effort.


When I used to work on PHP, we had a similar appreciation for this, and called it “the joyless programming language” (as a compliment)


Code is helped by being readable.

Saying your code is going to "look like sh£t anyway" seems rather defeatist, and an _excuse_ to write unreadable code.


Strong agree. I feel like the opposite is true of the Rust community. I follow a lot of prominent people in Rust and all they tweet about is intricacies of the language and new features/libraries. I'm not sure these people are even building anything, they seem to be "snacking" on the language only.

You don't see people in Go doing this, and it gives the impression that the community is small, but I think they're just building stuff.


People build libraries to solve real problems. Language features need to have proper motivations in order to be accepted; we have rejected more academic features that don’t have direct uses. For example, async/await solved a real pain point for our largest production users, and that’s why everyone has been talking about it.

Sometimes, these connections can be unclear from the outside. For example, there’s a lot of talk about “generic type constructors” and “generic associated types”, which sounds academic. However, it’s something the compiler needs to understand in order to implement a very simple user-facing feature: async functions in traits. From the outside it may look like “oh those folks are out of touch” but it’s directly connected to real user needs.

(As a further aside, these two features are identical, but “generic type constructors” focuses on the academic side, and “generic associated types” focuses on the end-user benefit; we changed our terminology here specifically to focus on what users get out of the feature rather than the type theory implications.)

Furthermore, some people tweeting about things they’re excited about does also not preclude others who are heads down all the time. You wouldn’t see them for the same reasons, they’re not tweeting.

These kinds of swipes against other languages lower the discourse and promote animosity when there really should be none. I’d encourage you to consider if these kinds of attitudes help bring about more people who are interested in building cool things, or fan flame wars that distract folks from doing exactly that. Every minute spent arguing over whose language is better is also a distraction from building cool things as well.


What are some examples of features Rust has rejected for having no direct use? (I’m considering doing some language hacking just to learn the more arcane aspects of compiler theory, and it’d be nice to have a list of “exotic features you won’t usually find in a language because they don’t do much to help people” to explore.)


There was a contingent of folks who argued that we should not build async/await, but that we should instead build a generalized effect system, or figure out monads and do notation, because async/await is a specific form of those things and we should wait until we can get the more general feature first. Higher kinded types is sort of in this space, GATs will provide equivalent power someday...

We rejected a proposal for dependent/pi types; we’re still adding a limited form, and may get there someday, but we didn’t want to go fully into them at first because the difficulty was high, and the benefit less clear, than just the simple version. (Const generics)

There’s a few other other features that we had and removed too I can think of off the top of my head. We used to use conditions for error handling. We had typestate.

There was a battle over type classes vs ML style modules, type classes (traits) won in the end. That doesn’t mean modules are useless...

I think the answers to these questions are very relative to your language’s values and goals. All of these features have good uses in other languages, but couldn’t find a place in Rust for a variety of reasons. Your language should be different than Rust, so you may find some of these features useful, and not find some of ours useful.

I would encourage you to read TAPL, I think it would help with what you’re trying to do. Oh and check out https://plzoo.andrej.com/


> People build libraries to solve real problems.

Not always, some Rust community members in fact don't build any sort of applications with Rust at all, and only build libraries. I think it's reasonable to be skeptical of this.

I think the community can do better by promoting more talks involving applications written in Rust. As an outsider this is a puzzling omission, as there seem to be more people using Rust than there are Firefox and Cloudflare engineers, so I'd like to learn more about where it's actually used.


> I follow a lot of prominent people in Rust and all they tweet about is intricacies of the language and new features/libraries. I'm not sure these people are even building anything, they seem to be "snacking" on the language only.

This is obviously false. How do you square that with all the Rust code we've shipped in Firefox, for example? I build things in Rust every day.


I didn't state that no one builds anything in Rust so I have no need to square it.


>>> I follow a lot of prominent people in Rust ... I'm not sure these people are even building anything, they seem to be "snacking" on the language only. - Touche

>> This is obviously false. How do you square that with all the Rust code we've shipped in Firefox, for example? I build things in Rust every day. - pcwalton

> I didn't state that no one builds anything in Rust so I have no need to square it. - Touche

Touche!

You are technically correct I suppose. You didn't state no one builds anything, but arousing suspicion that "a lot" of prominent Rust people aren't building anything and are just "snacking" on the language is pretty pointed rhetoric with an obvious purpose.

You could start up a Programming Language tabloid with a headline like:

EXPOSED: PROMINENT RUST PROGRAMMERS CAN'T EVEN WRITE IN RUST!

And really that's all before analyzing the line of logic of "people tweeting only about interesting language features and not their personal projects, public work projects, or private work projects implies they might just not be building anything at all" which seems pretty flimsy at best.


A lot of prominent Rust people, in fact, aren't using it in production. The Rust community is large and not everyone works for Mozilla or Cloudflare. I'm not going to call these people out by name because that would be a mean and pointless thing to do. I'll just point at that the community size to known production-uses ratio is not encouraging, to me at least.


Yes, this! If you walk slow, then you walk in a straight line to the destination.


Okay, so there's definitely simplicity. I agree that this is in many ways a nice change from some of the more popular languages, which can get a bit complex and heavy, with a focus on features that are nice in isolation but add up to a surprisingly difficult architecture to comprehend at the global level. I just don't think Go gets the balance right.

There are some parts of the language that are such a joy to use – C interop is simple and elegant, the concurrency story is great, the standard library is great, and in contrast to some other people, I think the error handling is also a nice example of simple and effective design. The wider tooling system is decently usable (with the exception of gofmt, as mentioned in the article, which I think is the single best tool I've ever used in any language).

But "simplicity" in Go-land seems sometimes to be the enemy of expressiveness. The lack of abstractions over operations like filtering, mapping, or removing items from a collection is incredibly irritating for such a common operation – instead of immediately obvious actions, there are tedious manual loops where I end up having to read like 10 lines of code to figure out "oh this just removes something from a list". The use of zero values is a crime against nature. The type system is shameful in practice compared to other modern languages, with so much code just being "just slap an interface{} in there and we can all pretend this isn't a disaster waiting to happen". It feels like such a lost opportunity to exploit a richer type system to eliminate whole classes of common errors (which are predictably the ones I keep making in Go code.)
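To make that complaint concrete: the standard idiom for removing items from a slice is a manual filter loop whose intent you have to reverse-engineer from the body (illustrative code; `keepEven` is a made-up name):

```go
package main

import "fmt"

// keepEven shows the idiomatic in-place filter loop. Nothing in the
// shape of the code says "filter"; the reader infers it from the body.
func keepEven(xs []int) []int {
	out := xs[:0] // reuse the backing array to avoid an allocation
	for _, x := range xs {
		if x%2 == 0 {
			out = append(out, x)
		}
	}
	return out
}

func main() {
	fmt.Println(keepEven([]int{1, 2, 3, 4})) // prints [2 4]
}
```

Compare a hypothetical `xs.filter(isEven)`: one token names the operation, instead of five lines implying it.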

I guess it's frustrating that a language—which is relatively well-designed and intended to be simple—too often makes things more complicated overall by making fallible humans do things that computers are better and more reliable at. I'll keep using it, because it fills a really nice niche that no other language is quite suitable for. But I'll keep being a bit annoyed about it, too.


> C interop is simple and elegant

C interop in go is super slow: https://github.com/dyu/ffi-overhead


Which isn't a contradiction of the statement that, for the programmer, the C interop is simple and elegant. Which I think it mostly is. The slowness comes from what goes on behind the scenes: mostly, the different ways Go and C handle stacks create quite some overhead when calling into C and returning. Languages which use "C compatible" stack and register layouts get much faster C calls. That doesn't mean they can call C as easily.

So calling C from Go for small functions isn't a performance gain; you should write those in Go. Calling C is great for linking against larger libraries which cannot be ported to Go, and for that it works nicely and is fast enough.


Yes, this would be the point I would make too.

I find I use Go quite often to build relatively compact tools and services that need to use some features of a fully-featured and popular C library to perform some complex function that would be expensive and time-consuming to implement.

A recent example of this is using `libusb` and `libgphoto2` to enumerate DSLR cameras attached over USB and capture images from them in response to some other events, with a small web UI on top. It's maybe a few dozen lines of Go to call some enumeration and capture functions, and then I get a nice byte slice with all the data I want. There is minimal friction, the interaction is clear, and any performance cost is worth paying because of the simplicity of the interaction.

It's entirely true, and a well-known caveat, that the C FFI is slow. This makes it inappropriate for some use-cases, but entirely suitable for others.


> And for that it works nicely and is fast enough.

It may be fast enough for you, but it certainly isn't for many other people. Go will, for example, never grow a large mathy/scientific ecosystem because of it.


Note they never say it is fast, or good.


I don’t know what the Go designers think about this, but I can at least appreciate the tradeoffs of not having map/filter/etc. Memory allocations remain clear and explicit, errors are handled consistently, concurrency is explicit, and it dramatically reduces the urge to write long chains of over-complicated functional operations.

Sometimes I run into a situation where I’m like “Sigh, filter would have been nice here.” But it’s pretty rare. On the other hand, ”clever” programmers love to make incomprehensible messes out of functional constructs.


That is the justification I usually hear, but I don't buy it. Like, 99+% of the time, I want to use these kinds of operations to manipulate lists which are orders of magnitude away from anything even approaching a scalability issue. I just don't care about memory allocations when trying to manipulate something like, say, a list of active users in a real-time chat or something. And often I find that the mess coming out of having to implement those same operations without expressive constructs is worse than the messes that people can create (though I grant I've seen those errors too).

This hints at some of the ideas behind Go – it's designed, perhaps, for Google-scale software. This is dealing with problems (like e.g. memory allocation) that I don't have when working with most datasets I'm likely to need. Maybe we just have to accept that.


> I just don't care about memory allocations when trying to manipulate something like, say, a list of active users in a real-time chat or something.

This is exactly the kind of thinking that the Go language pushes back against.

> Maybe we just have to accept that.

I think so. At least for now. I recently watched a talk by Ian Lance Taylor that made it very clear to me that generics are coming (https://www.youtube.com/watch?v=WzgLqE-3IhY). When we have generics, map/reduce/filter will absolutely be introduced as a library at the very least.

> This is dealing with problems (like e.g. memory allocation) that I don't have when working with most datasets I'm likely to need.

I don't think that's exactly it. It's more about runaway complexity. You might use these primitives to perform basic operations but other people will misuse them in extreme ways.

Consider this: suppose there was a built-in map() function like append(). Do you use a for loop or a map function? There'll be a performance trade-off. Performance-conscious people will always use a for loop. Expressiveness-conscious people will usually use map() unless they're dealing with a large dataset. This will invariably lead to arguments over style, among other things.


For loops violate the https://en.wikipedia.org/wiki/Rule_of_least_power. Because they could do anything, you have to read each very carefully to find out what it's actually doing (which may not be what was intended). Flat-map and filter are more concise and clearer, and if my platform makes them slower that's an implementation bug I should fix.


In practice, reading a for-loop has been less problematic for me than reading the incantations of a functional programmer who’s been reading about category theory.

I know all about the virtues of functional programming patterns, and use them in personal projects, but in my day job working with dozens of engineers in the same codebase, I appreciate not having to decode the idiosyncrasies of how each engineer decides when and how to use higher order constructs, and the subsequent performance, operational, and maintenance implications. It’s a lot easier for me to just read a for-loop and move on with my life.


Before generics are supported officially, one can use code generation to get that effect.

I did that for TypeScript; it should be applicable to Go as well. Ref: https://github.com/beenotung/tsc-macro


These properties depend on the language; iterators do not have to allocate, can still expose error conditions, and make concurrency explicit.

I think this is sometimes why it’s so hard to compare and contrast languages; even surface level features in two different languages can have two very different underlying implementations, which can mean you may like a feature in one language and dislike it in another.


I don't think I've used interface{} in my last two years of using Go. The only case was an unknown JSON object that went into a map[string]interface{}, and that's it.


I agree with you on most points, and they are working hard to fix the generics issue in a way that does not make you lose all the nice things you mentioned. The only part I didn't get is:

> The use of zero values is a crime against nature

Can you elaborate?


This one is specifically about the values some types assume when declared in a struct or variable. Like, a struct with a string field gets an empty string value by default. I appreciate the reason for it; it just jars horribly with my expectations.

The worst offender is `Time` – to quote from the documentation:

"The zero value of type Time is January 1, year 1, 00:00:00.000000000 UTC. As this time is unlikely to come up in practice, the IsZero method gives a simple way of detecting a time that has not been initialized explicitly."

That seems to be an absolutely baffling decision.


I think the rationale goes something like this: how do you create a fixed size array? Of structs? Where one field is another fixed size array of timestamps? And on the stack?

Without some kind of default value, you end up with a lot of nested constructors and loops initializing array elements and fields, versus just zeroed out memory.

C++ does the constructor thing but it seems complicated and finicky when you don't have a zero-arg constructor. When I looked at Rust array initialization it looked somewhat limited, but maybe I missed something.

I'm not sure if every type really needs a zero value like in Go, but it seems like standard types like timestamps should, so you can use them this way?


Array initialization is a bit awkward in Rust right now, it’s true. It’ll get much better in the nearish future.


Initially I found it quite baffling that slices or hash tables don't need initialization to be valid, but long term, I find this brilliant. Especially for slices: that a zeroed-out slice struct is a valid empty slice is very clever and simple.

I also don't see what your problem with time is. Zero is one set point in time. Not sure why you want an "invalid" timestamp; it makes as little sense to me as an "invalid" integer. Why would you need to check that a time value has been explicitly initialized? And for most purposes, the zero value pretty much would express that anyway. If not, you can wrap the time value into a struct with a nonpublic initialized bool slot. But I really would like to know what your use case for this would be.


Generics is only part of the issue. Go has a crippled type system.

JSON in Go cannot be represented as a type, and it is also not type safe. This is 100 percent because Go is missing a basic type primitive.

JSON, contrary to what many people think, is not untyped. JSON is represented by a recursive sum type, which Go has no ability to represent.

I think what caused this issue is that the designers of Go are more systems people than language theorists.


> Json in go cannot be represented as a type and also is not type safe.

Not a Go programmer, but why don't you simply use an explicit type tag and a number of getters? They can assert that they're not called in invalid situations.


While there is certainly something to be said for simplicity in a language, I think Go is too extreme in this regard. What it gains in simplicity it loses in expressiveness. This leads to more lines of code per feature, and the number of bugs in software has been shown to be a function of LoC [1,2,3].

One example of Go's lack of expressiveness is that loops are not abstracted away, e.g. you can't use higher level constructs like map and filter on collections. There are other languages that IMO strike a better balance between simplicity and expressiveness, like Python and Elixir.

1. https://ieeexplore.ieee.org/document/1702967

2. https://link.springer.com/chapter/10.1007/978-1-4471-3003-1_...

3. https://www.amazon.com/gp/product/0735619670/ (Chapter "Developer Testing")


I ported a fairly complex (thousands of lines of code, in use in production) program from Python to Go.

The Python version is around 1/3-1/2 the size of the Go version, but the clarity with which bugs are visible in the Go version is astonishing.

There's a lot of code in Python version that "looks right" and even "does kind of the right thing a lot of the time." But the Go version lays bare corner and edge cases that were not obvious in the concise Python version.

I agree that Go usage does suffer sometimes from lack of generics. I've sometimes used code generation in its place, and it works, but it's a poor solution. At the same time, the lack of generics has helped me avoid over-abstracting in other places, which is a common bane of mine whenever I work in more "expressive" languages.


The results on code lines being directly related to bugs is getting old. Programming languages and practices have changed substantially in the past couple of decades. I'd want to see more recent evidence.

Besides, in practice I find lines of code in Go vs. Python are only maybe 20% different in many common cases, if you ignore the struct declarations, which I have a hard time calling "code". (Not being a Turing-complete language, that particular subset can't be considered vulnerable to the issues that Turing-complete things can have.) I find myself very frequently wondering how many people complaining about all the lack of abstraction in Go have A: used it and B: used it to the point they're writing Go in Go. (Go just blows if you're trying to write Python in Go or something, or if you insist that you know better and must use map, filter, etc. rather than for loops. For all languages X, writing X in X is important, but Go is really graceless if you don't learn how.)

Now, if you do get to B, it is still true that Go does not quite abstract as "well" as some other languages, but it does abstract well enough that you can start having reasonable conversations about whether those other abstraction tools are actually paying their way in a lot of larger, shared code bases. If we throw out a ton of language complexity, but only pay a relatively small price, are they really carrying their weight? The answer will vary from code base to code base, but I think it's not necessarily some sort of horrid, unthinkable idea that not every program must be assaulted with the most powerful, complicated toolkit regardless of the price incurred by the complication.


>The results on code lines being directly related to bugs is getting old. Programming languages and practices have changed substantially in the past couple of decades.

The results in those "past decades" used all kinds of languages, some far more advanced and modern than Go is today (e.g. Lisp, Smalltalk, Ada).


As someone who programs both in Lisp and in Go (and I have used a little bit of Smalltalk), I have to comment your statement.

In a sense, Lisp is the most advanced language ever designed, and still, it is ignored by most of the programming community :). Equally, I think, Smalltalk does object orientation better than most languages which followed it.

Both languages are mostly dynamically typed, so comparisons to statically typed languages need to take this into account; they are different beasts. On the other side, I think Go has an interesting mix of statically and dynamically typed features, having interfaces for static contracts and interface{} as a fully dynamic type, which still offers full type safety at runtime.

Go structs also have some interesting properties, if you look at them beyond being containers. Any type can have methods, and structs can embed other structs and thus "inherit" their methods in a sense. This is not full-blown object orientation, but it gets you surprisingly far.


If we're going to quote science, we need to use it scientifically. It may be the case that all the changes in the decades since those studies are irrelevant, but we really can't just assume that. And that's assuming the results of those studies are even relevant to real programming in the first place, since IIRC they, like almost every other study ever done on programming, did all their analysis on completely toy problems.


I'm not sure I would call Python simple. It's a great language, and it's not as complicated as something like C++, but it's not simple.

For example, when should I use NamedTuple and when should I use a dataclass? Should I be using a list comprehension or map/filter/reduce? When should I split that list comprehension? I have seen (and made myself) terribly long lines with only list comprehensions, and it was hideous.


When I said Python was simple I was talking about the concepts, functions and datatypes in the core of the language. I agree that if you include all the modules that come with Python it is a "big" language.


> Should I be using a list comprehension or a map/filter/reduce?

yes, use list comprehensions -- they are more readable.

>When should I split that list comprehension?

just use common sense. if you can read it and understand without thinking too much, that's good.

readability beats short code 100% of the time.


You can write perfectly fine Python without any of the features you mentioned.


Sure, but the more of a language you don't need to write perfectly fine programs, yet still have to know in order to know the whole language, the more it seems superfluous. (See the whole drama over the walrus operator.) Python has accumulated a bit of rust also: old ways of doing things that were replaced with more modern ways, but that you should not use often. New languages, or languages that don't evolve fast, don't have them. Python is becoming more and more complex (not tragically, like C++) and v1.7 was indeed much simpler, but it's still a nice language considering its age and scars.


To add, here's a nice article on implementing map from scratch: https://blog.burntsushi.net/type-parametric-functions-golang...

It illustrates what you have to pay to get this kind of functionality for arbitrary slices.

I wonder how this would change with generics.


And yet introducing higher level functionality also leads to more difficult to detect bugs. I'm not sure the "lines of code:errors" correlation truly applies to all languages.

That said, anecdotally I too have found it much easier to find errors visually in go programs due to its simplicity. I find the same is true of C vs. C++.


> One example of Go's lack of expressiveness is that loops are not abstracted away, e.g. you can't use higher level constructs like map and filter on collections.

This is (or was) deliberate, with the rationale being that it makes O(n) blocks of code trivial to identify in review.

Whether or not you buy that argument is a separate thing, of course.


Can you explain why you think it makes O(n) blocks trivial to identify?

How is such identification made easier by manually writing out a `for` loop applying a function foo to each element rather than writing `map foo myCollection`?


In every mainstream language I can think of `map foo myCollection` creates an intermediary map.

Memory allocation is so expensive that making that copy is often more expensive than calling `foo` on each element.

Sometimes making a copy is exactly what you need and there's no way around that cost (but hold that thought).

But I've also seen `sum map foo myCollection` so many times (especially in JavaScript).

Here you have a short, neat but also extremely wasteful way of doing things. I see it so frequently that I assume that many people are unaware of this cost (or completely disregard performance concerns).

If you were to write this imperatively, it would be obvious that you're making a copy and maybe you would stop and re-think your approach.

But there's more.

If you're paying attention to performance, an easy way to improve performance of `map foo myCollection` in e.g. Go is to pre-allocate the resulting array.

For large arrays, this avoids re-allocating the underlying memory over and over again, which is the best a `map` can do.

In imperative code those costs are more visible. When you have to type that code that does memory allocation, you suddenly realize that the equivalent of `map` is extremely wasteful.


I don't want to discuss this forever but I have a couple of comments:

Your points about efficiency are a separate topic entirely from the original claim that manually writing out an imperative solution makes it easier to see the algorithmic complexity. That was a surprising claim to me because, in my experience, if I understand what some HOF is doing, reading the code is even easier because there is less of it to wade through (and mental exhaustion doesn't make one easier to read vs the other).

> In every mainstream language I can think of `map foo myCollection` creates an intermediary map

You need to build up the final result, sure. Not an intermediary map but whatever structure (or functor) you're mapping over. That is the whole point of immutable data structures. Also when you're using persistent data structures, which all modern FP languages do, the cost of constructing the result can be far less than what you expect, especially if the result is lazy and you need only some of the results. There is a cost to immutability and if it's unbearable in some situation, fall back to in-place mutation but the semantics of these two approaches are definitely not the same.

> But I've also seen `sum map foo myCollection` so many times

Yeah... that should be a fold (reduce, whatever). :)


>> In every mainstream language I can think of `map foo myCollection` creates an intermediary map.

Language support for Transducers can fix this, you can compose functions like map / filter / reduce over a collection and only hit each item once.


Even Java can do this. You only pay for another Collection<T> if you actually #collect the elements of a Stream<T>.


Is Rust mainstream? If so, this code:

    numbers.iter().map(|n| { ... }).sum();
Compiles to a plain loop, with no allocation.


(You want for_each not map, by the way)


Wait, why? AFAIK for_each doesn't return another iterator, and I'm calling sum afterward.


.... hacker news cut off the “sum” part, and so I didn’t see it. On my phone it just happened to clip it exactly long enough that it was still valid syntax, wow. Anyway, sorry and my bad!


The way many people read code, it's much easier to overlook three characters than it is to overlook three lines with indentation.


That's quite an interesting, if frightening, take on it. Assuming a developer knows what `map` does, then I wouldn't have very much confidence in their reading comprehension if they somehow mentally skipped over the main higher order function being called on a three word line of code.

Would anyone expect such a developer to read and parse several more lines of code more reliably, in order to understand the algorithmic complexity? Seems unwarranted to me...


Humans make mistakes. People code review after exhausting days or late at night. Or sometimes they skim more than they intended to. You want to make problems in the code obvious everywhere that you can.

In programming, if you miss an important detail the repercussions can be high. In a million line codebase every idiom that's slightly more complex than it needs to be is going to result in dozens of additional bugs, sheerly because of the increased surface area for making mistakes with that idiom.


I agree with everything you've said here but being mentally exhausted doesn't make it more likely that you can read even more lines of code in a more reliable fashion.

I think the clue to your thinking, for me, is in your description of the `map` HOF as "slightly more complex". Having years of experience with both paradigms, I've found that grokking a call to one of these fundamental building blocks (map, filter, reduce, fold, etc) is nearly instantaneous. We've all experienced reading prose where the author was excessively verbose when the same point could have been made succinctly. It feels the same way reading for loops once you get over the learning curve of these very basic functional constructs. You have to keep repeating that boilerplate endlessly and it's very tedious to keep writing and reading it.


> Humans make mistakes.

Yeah, and it's much easier to make a mistake with a loop than with a map or filter.


>This is (or was) deliberate, with the rationale being that it makes O(n) blocks of code trivial to identify in review.

Whereas grepping for .map(), .filter(), .reduce() wouldn't?


> This is (or was) deliberate, with the rationale being that it makes O(n) blocks of code trivial to identify in review.

Like how

    for (int i = 0; i < strlen(str); i++) { ... }
is O(n)?


Let's give them the benefit of the doubt, and consider that they meant that:

  for (int i = 0; i < strlen(str); i++) { ... }
could be grepped and easily discerned from:

  for (int i = 0; i < str_length; i++) { ... }
and generally, that every looping point could be easily identified.

After all, if you call functions inside the body, all bets are off as to what complexity the loop means anyway...


So how does it work exactly? Are functions forbidden?


Functions (and methods) are the only way to "hide" computation and allocation in Go. That rubric makes reading Go code to build mechanical sympathy relatively straightforward.


> What it gains in simplicity it looses in expressiveness. This leads to more lines of code per feature, and bugs in software has been shown to be a function of LoCs

If the lack of expressiveness equals to greater amount of code and thus more bugs (in Go), does the same apply for languages that are more verbose (say, Java)? And if language is more expressive and less verbose (say, Python or Javascript maybe?), does it mean less bugs?


The OPs claim that it has been shown that bugs are a function of LoCs is unsubstantiated at best.

The number of bugs in a program correlates with the number of LoCs, but that's about it (correlation does not imply causation).

The claim is also quite absurd, since, for example, you can pre-process any C or C++ program ever written into a single line of code, yet this operation doesn't reduce the number of bugs in these programs, therefore the amount of bugs is not "just" a function of the amount of LoC.

Languages with better abstraction capabilities than Go (e.g. Rust or Python) might require programmers to type less, but the resulting lines of code are often dense. For example, I'm quite comfortable with Rust iterators and Python list comprehensions, but it still takes me much longer to parse what code using these abstractions does than code that just uses a simple for loop. It definitely feels great to cram a 10 LoC loop into a 1 LoC list comprehension in Python, but reading that code 6 months later never felt that great, even for code that I've written myself. I just need to stop at those lines for much longer.


Really? For loops are a nightmare. Here's one from SPEC 2006:

    for (dd=d[k=0]; k<16; dd=d[++k])
    {
        satd += (dd < 0 ? -dd : dd);
    }
The problem with for loops is that they are too flexible, and so there's more opportunity for misuse. Iterator functions are less flexible, and therefore easier to read.

And this excessive flexibility doesn't even make simple things simple. Try counting down from N to 0 inclusive using unsigned ints.


Can you solve the exact same problem using Python list comprehensions or Rust iterators?

I'd like to see how solving this same problem in those languages is less of a nightmare.

I personally find that C loop quite readable compared with the same iteration being done in Rust.


This is a sum of absolute differences, so in Rust:

    let satd: i32 = d.iter().map(|n| n.abs()).sum();
As an added bonus, there's no undefined behavior!


Loops in go are much less flexible than loops in C, you'd never see code like that in go.


In Go that loop could be:

    for k, dd = 0, d[0]; k < 16; k, dd = k + 1, d[k + 1] {
        ...
    }
I don't find that significantly more readable than the C version.


Yeah, a quick glance and I have no clue what's going on; tl;dr in review for sure.


>The number of bugs in a program correlates with the number of LoCs, but that's about it (correlation does not imply causation).

No, but it suggests it, and in this case there aren't many explanations other than causation.

>The claim is also quite absurd, since, for example, you can pre-process any C or C++ program ever written into a single line of code, yet this operation doesn't reduce the number of bugs in these programs, therefore the amount of bugs is not "just" a function of the amount of LoC

That's an irrelevant strawman. When people speak of LoC they mean how many LoC are needed for various tasks given the expressiveness of the language, not whatever LoC results from arbitrarily adding needless lines (e.g. by preprocessing to inline function calls)...


> The OPs claim that it has been shown that bugs are a function of LoCs is unsubstantiated at best.

But when looking at OP:

> bugs in software has been shown to be a function of LoCs [1,2,3].

He provided three different sources! Maybe it's wrong, maybe you don't believe it, maybe it doesn't fit with your beliefs, but you can't say the OP's claim is unsubstantiated when he provided academic sources on the subject!


In general, yes. The more lines of code per feature, the more bugs per feature. Also, the guarantees you get from a typed language like Java do not seem to prevent bugs to a meaningful degree. Probably things like the programming culture associated with a language are more important, like a strong preference for testing.

This is an interesting study on programming languages' effect on software quality: https://web.cs.ucdavis.edu/~filkov/papers/lang_github.pdf


I wonder if everyone actually does initial testing and bug fixing after committing, because otherwise any effect of programming language choice might be gone before the code reaches the commit logs.

I'm not sure it makes sense to consider code correctness separately from productivity. You can probably get code written in any language to roughly similar levels of correctness. But how much work does it take to get there? That's what I'm really interested in.



OK, I understand that people who use Go have a different idea about it, but...

> Code duplication is more acceptable in the Go community than elsewhere.

> Many times, you’ll need to write that functionality yourself, or copy/paste it from a StackOverflow answer.

Am I the only person who thinks this is a regression? How can code duplication be defended in the name of 'simplicity'?

> Fewer dependencies means fewer things that break while you let a project sit idle for a couple months.

I thought that pinning dependencies is *the* solution for this? I don't use Go, so I'm not sure about the Go packaging story, but is there no package pinning solution? Yarn & npm handle this beautifully...


> Code duplication

It can be easier to maintain a small amount of code that you duplicate, check in, and treat as your own, purpose-built for your use case, compared to a general library imported for one routine that you have to think about (licensing, upgrading, auditing). Or sometimes libraries break backwards compatibility to add features you aren't interested in, and it makes busy work to stay up to date.

> Is there no package pinning solution?

Multiple dependency managers have supported pinning since at least 5 years ago e.g. glock, dep, glide. Go Modules supports pinning.

> Pinning is the solution for avoiding breakage due to dependencies

Related to the first point, in general you have to do work to upgrade your dependencies. If not because you care about bug / security fixes that happen, then because you use ANOTHER library that has the same dependency, and that other library uses a feature only present in a newer version. Any time you upgrade a library it takes work to verify nothing has broken, and potentially work to fix things that do break.
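For reference, with Go modules the pin lives in go.mod (module path and versions hypothetical):

```
module example.com/app

go 1.13

require (
	github.com/pkg/errors v0.8.1 // exact version recorded here; hashes in go.sum
)
```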


> It can be easier to maintain a small amount of code that you duplicate and check in and is purpose-built to your use case and treat as your own, compared to a general library imported for one routine that you have to think about (licensing, upgrading, auditing). Or sometimes libraries break backwards compatibility to add features you aren't interested in, and it makes busy work to stay up to date.

The overhead you're talking about isn't a real problem in my experience, as long as your dependency manager has semver support (which all of those I know do).

When writing (or copying from SO) a simple snippet that's supposed to do just what I want, almost 80% of the time there will be a bug in it (typically edge cases are not covered)…

For instance, before Go 1.10, there was no rounding function in Go's std lib, and 99% of the code you could find on the internet to do this (simple) task was buggy: some failed with negative numbers, and others due to bad handling of floating point numbers…


A lot of the answers on stack overflow are garbage. That whole site sometimes feels like the blind leading the blind.


> It can be easier to maintain a small amount of code that you duplicate and check in

> compared to a general library imported for one routine that you have to think about

IMHO, that's why small, one-utility-function libraries in the JS ecosystem([0], [1], [2]) are useful. (Unlike the general tone in HN where npm and left-pad gets a lot of hate.)

Look at Quickutil[3] for examples of other languages. (In this case, Common Lisp.)

[0] https://github.com/sindresorhus/mem/blob/master/index.js#L7

[1] https://github.com/sindresorhus/tempy/blob/master/index.js#L...

[2] https://github.com/zeit/ms/blob/master/index.js#L48

[3] http://quickutil.org


My current mayhem is containerizing an object detector pipeline/system with dozens of gigabytes of dependencies. The main dev has pulled in a library that makes opencv look like small potatoes, just to be able to use a whopping two functions. Apart from the misery that is a 4 hour container build every time something breaks, someone upstream broke my implementation (python 2, because ROS, ugh) with a casual '@' for matrix multiplication.

Yeah I love DRY and code reuse as much as the next guy, but sometimes copypasta makes sense.

I will say go's nature and/or culture make grafting in functions a lot easier.


One thing I omitted in the response which might be relevant is that usually you can only use one version of a package in any given build[1], so you have to use a version that works for everything that depends on it.

I think that might be a difference for folks coming from other ecosystems.

[1] The asterisk is that you can have multiple major versions coexist using Go modules, or you can use gopkg.in import paths, or some other workarounds, but in my experience that is not common and typically you do have to find a single version that works for your whole program.


I do not think Go Modules supports pinning except at the major version level and even there v0 and v1 are treated as fluid.

My experiences trying Go Modules has shown it to “randomly” modify the go.mod file.


The go.mod file implicitly pins to the lowest version that meets the requirements, as I recall.


Often developers create an abstracted mess to avoid code duplication.

The code is hard to follow and modify later if the common functionality needs to be handled differently.

There are many times when working on my old code that I was burned by this.

Code duplication isn't necessarily a good thing, but the cure can be worse than the disease. It is better to be pragmatic.


>Am I the only person that this is regression? How can code-duplication be defended by 'simplicity'?

Easily. People can avoid code-duplication in many ways, and some of them are harmful.

E.g. by building abstractions that get out of hand and add mental burden to track the program flow and bugs.

Or by making any small piece of code into a standalone 2-3 line function, so that anything always happens elsewhere, and keeping track of flow is made difficult.

Or by prematurely putting code into a function when it's just used in 2-3 places, and then when another need arises that doesn't fit in to the already captured function, hacking the function to handle different behaviour on different arguments/flags and ending up with monstrosities.


Code-duplication and goto statements have something in common: neither is inherently bad, and both are zealously maligned inappropriately by the developer community in aggregate. Sometimes two functions perform the same abstract operation on the data but have subtle differences in the implementation: fully duplicating the common parts in two different functions is not a bad thing necessarily. But "testability" and "reusability" zealots will turn blue in the face arguing that one should carve the functions up into a mess of abstracted spaghetti to support them both. The result is code that isn't any easier to read or maintain and, worse, is less performant.


Code duplication reduces coupling between multiple unrelated pieces of code.

Edit: not sure why the downvotes, perhaps these people have never seen a junior engineer go on a fanatical deduplication spree across a whole project, or perhaps these are the junior engineers.


A little code duplication prevents mounds of dependencies.

I’ve seen people link to multi-megabyte modules to save one line of duplicated code. At Google this was a disease. Good to see they’re fixing it.


Surely there is a middle ground.


"Don't make the code so DRY it chafes"


In Go, the informal rule is to duplicate the second time you need a bit of code, and factor out only on the third time.


Yes, the middle ground is to avoid duplication when code is non-trivial.

Most code is trivial.


I think there's more to it than finding the middle ground.

The question we should ask is whether or A should change whenever B changes. If the answer is yes then there should be a dependency. If the answer is no then no dependency should exist, even if some lines of code happen to be identical at a particular point in time. (That's probably what you alluded to when you said "unrelated pieces of code" and I agree with that).

The number of lines of code shouldn't be the primary concern though.

For instance, if some tax is calculated in a particular way as a matter of law, then that calculation should exist exactly once in the entire code base, even if it's just one line of code.


> The question we should ask is whether or A should change whenever B changes.

Spot on.

Avoid applying DRY principles to incidental duplication.


Exactly right.


It's not just junior engineers. Senior engineers do this, too: fanaticism and adherence to some set of practices no matter what, regardless of the context are as much a bane in this industry as chasing the latest shiny is.


It’s easier, to me, cause I can open almost any Go source file and expect pretty much some vanilla looking code and not some cleverness from a dev that is no longer working there for whatever reason.

Look at our fiction and non-fiction writing; very few real plot loops and themes. There’s a common wisdom in writing I’ve heard called “kill your darlings”, those bits of prose that you love in isolation, but big picture, really don’t add to the detail of the world or motivation of the character

I do the same with code: keep code itself vanilla af to avoid complicating or distracting from the big picture of its value

Oh and that’s at work. To be certain, I fiddle with favored abstractions and ideas on personal projects when the mood strikes. I cram assumptions into abstractions because it’s just me.

But at my job, where I only go because of social pressure to conformity, I don’t really want to wade through other people’s emotional opinions. I just want to copy/paste together same old to keep it easy.

Give a programmer a job building some thing real like a car and you’ll get The Homer.


As with anything else this depends on context (code, assignment, goals, etc). One sentiment that I share with OP is that somehow, you do feel less pressure to over-dry than other languages. Probably because (1) more code doesn't necessarily feel like more bug surface (2) you can avoid a class of issues like parameter (mis)usage and excessive branching that you would need to satisfy de-duping.

Obviously trivial cases of code duplication could/should be resolved, but for less than trivial cases sometimes its just fine to cmd-c & cmd-v with some minor use-driven modifications.

Remember the lib versions of many of these snippets need many many parameters to satisfy everyone's needs.


Most people are thinking of gos simplicity vs. javas over abstraction of oop. I personally prefer the ugliness of go to the over abstraction of java.

However go is missing fundamental concepts of algebraic data types that in itself is causing problems. In their effort to create a language that moved away from the flaws of OOP they failed to have knowledge about type theory and created a language that in removing the flaws of OOP they have also accidentally removed fundamental language concepts. My guess is that they didn't know about the theoretical concept of sum types at all because sum types exist in Java as inheritance.

The two biggest being parametric polymorphism and sum types and it is these two things and the consequences of these two things that are causing the ugliness in go. Everything from default zeros, functions that return both errors and values and repeated code.


> Just Enough Batteries Included

This is just an anecdote, but recently I decided to give golang a spin, and wrote a small command line utility in it.

I won’t tell you the pain of trying to find an argument parser with support for GNU-style long options. I suppose that’s something I should write myself rather than use the standard library or find a third party library, because simplicity. /s Edit: folks posted helpful suggestions, I take this paragraph back.

Instead, let’s talk about something basic, regexp. Turns out golang’s regexp package supports named capture groups, but if you want to actually refer to the capture groups by names, you need to manually pair FindStringSubmatch result with SubexpNames in a loop. Can’t make this shit up. Seriously, what’s the point of named capture groups if you can’t refer to the groups by names? If this is just enough battery included, your expectations might be a tad too low.


For gnu style long options you want: https://github.com/spf13/pflag

It is a drop in replacement for the built in flag library.

I ususally start out with the built in flag library and migrate to pflag when things get a bit more complicated.


Thanks, not sure why I missed this. Probably should have used this instead of settling on urfave/cli. I take back some of what I said.


For CLI it’s true to the Go mantra. If the flags library is “good enough” why not just use it and avoid being fancy?

Besides, isn’t it feasible someone might confuse long and short options? “Is -d and —delete the same, or is -d —debug??” KISS even if it’s annoying.

As for loops, I know that they’re generally more common in Go. Another example is finding if an element exists in a slice. Most of the implementations is this stuff is just a loop under the hood in the end.

From a Java background it was strange to me because I’m used to dealing with classes and there’s always a method for this or that...


> For CLI it’s true to the Go mantra. If the flags library is “good enough” why not just use it and avoid being fancy?

Users of a command line utility ultimately don’t care what language it’s written in; having a familiar interface is a much more important goal. Since the vast majority of Linux utilities these days use GNU-style rather than Unix-style long options, it stands to reason that the latter is a better choice for users. (There are also technical advantages I won’t get into.) I know Rob Pike probably isn’t a fan given his background, but why not give developers the choice to build what they want to build, instead of shoehorning them into something they dislike? What’s the point of being opinionated here? (Edit: Sorry, didn’t mean to imply that the standard library ought to have it; however, I do find it strange that in the ten years of golang the community didn’t bother to come up with a reasonably good library for it.)

Btw, flags is very primitive and can hardly handle anything complex, even if you accept the Unix-style.


Sorry for the bother but I'm a big lost due to the lingo and the interwebs are unclear: by unix-style long options do you mean single-dash long options (e.g. `-long` being a single option), and therefore flags not allowing concatenating short options (e.g. `-r -x` can't be written `-rx`)?


> by unix-style long options do you mean single-dash long options (e.g. `-long` being a single option)

Yes.

> and therefore flags not allowing concatenating short options (e.g. `-r -x` can't be written `-rx`)?

Depending on the implementation concatenation of short options may be allowed; it certainly could be ambiguous. I'm not sure about flags (the golang package) though.


There are several libraries for it. urfave/cli is my go to.


I literally couldn't disable subcommands forced by urfave/cli. I tried very hard to disable the help subcommand.


I've found that Cobra[0] does pretty much everything I've needed via a reasonable API, but I'll admin I'm not fussy about CLI utility behavior.

[0] https://github.com/spf13/cobra


This is what you're looking for: https://github.com/spf13/cobra They are many libs that do that, you probably didn't search very long ...


This is one of the options I evaluated but rejected.


Why is that?


It assumes you need a bunch of subcommands, except I only need a good old single command with options, a la curl. The help subcommand in particular is mandatory.



My feelings on Go are pretty mixed. I think a significant portion of software can be described inside the box that you’re given with Go but if you enter even moderately complicated territory it becomes a shit show.

I’ve had the displeasure of working on a pretty complicated Go project at work and it’s given me plenty of counters to what OP is saying.

First, Go’s dependency management has been terrible and inconsistent for years. Dependencies started very loose and without any versioning at all and I was bitten many times by this mostly because developers used the looseness of Go get to consume repositories that were never meant to be public APIs and without versioning those repositories had zero guarantees of compatibility.

The community eventually noticed this was a problem and vendoring was introduced but the community and Go itself flip flopped on a “correct” solution for ages which ultimately led to even more package incompatibility. Now we’ve got modules which is a step in the right direction but it’s going to take quite some time for all of the Go projects to implement it correctly. Modules have been nothing but a pain in the arse for us because our Go project has a lot of dependencies.

Another thing I hate is the idea that people spread that Go’s simplicity leads all developers to write simple, clean code. Some developers, I’ve noticed, pine for abstraction like moths towards a lightbulb. In Go abstraction can be very painful to do elegantly and I’ve seen some nightmares where developers have tried. Using reflection on interfaces produces some of the shittiest code I’ve seen in a long time and yet I’ve seen it repeatedly by different developers in different organizations.

I must not be the only one who has noticed these issues because the creator of Go himself made the Go Proverbs (https://go-proverbs.github.io/) which include things like “interface is never clear” and “reflection is never clear”.

Okay, I’m going to stop ranting now but I seriously don’t get the hard-on for Go. I think I kinda got it before I used it professionally.


This is exactly the take I have been expecting to unfold for a lot of developers regarding Go. It is a very compelling language to get started with because it is so reductionist compared to alternatives like Java and C#. This makes learning it very straightforward, to the extent that you would almost want to die defending the language that is feeding your dopamine loop.

But, one day you might wake up and realize it was all a fantasy. That is, if you ever step foot into a shop where you are expected to build a line-of-business application with sufficiently-complex business entities and rules. All of your ideas about how wonderful Go is will fly directly out the window. Sure, you can eventually implement anything in Go. It is Turing complete after all. But, that isn't to say you'll have enough time to implement it. Go advocates will happily dismiss the value of things like LINQ, generics & reflection until they are faced with a reality where tools like these are the most productive way get a feature into production before 2020 hits. If you open your mind up to the possibility of using different (god forbid, unique) approaches to solve problems, you might then find that Go does not even offer the best implementation of these alternative language features.


Also have worked in some pretty massive Go codebases. It's about writing the language idiomatically. If you're super concerned with trying to mimic Java-style abstraction and shoe-horning OOP into Go you're going to have an awful time.

If anything, I've found shipping code to be WAY faster in Go than Java due to the much cleaner build system and approach to testing.

The one main area where Go struggles is in math-heavy code where the lack of generics or algebraic types means duplication or code-gen.

But you're right. It's not the be all end all of languages. If anything, it's just a much better substitute for the use cases where Java traditionally excels.


This is not my experience at all. We've spent the past few years building a multi-hundred thousand line of code distributed system in go and have found that sticking to idiomatic go has been very beneficial in building the thing successfully.


Everything you describe screams to me of choosing the wrong tool for the job.

Go certainly isn't a general all-around fits-all tool, it's a rather specific and opinionated approach. It's almost a given than Go alone can't solve 100% of most company's needs, this is not Js or Python or C#.

So trying to shoehorn a classic OOP or functional paradigm into Go is just asking for trouble, and no the language won't change to become a clone of another (what would be the point?)

If generics and reflection are core to your solution, there are languages for that. Don't make your life harder. Not every codebase needs to be in one language only.


Docker and Kubernetes are not real world enough for you?

I have ported stuff from Java to Go, for my long running app, and very happy.

Also at a client place, over the past year, I developed stuff in Python and Go, where Java was traditionally chosen, and it led to faster execution times, and stable microservices.


If it so simple why are the developer ergonomics so bad, if err != nil is a mess (Elixir has a with clause that would be useful), everything points to github master and module management is still extremely basic. The error messages are absolutely atrocious compared with something like elm or even python or Typescript, the type safety is a lie fairly often and you can still write programs which crash due to that. Then there are go channels which are kind of cool until you realise they can very easily lead to race conditions and a version of the actor model would have been much better. I could go on but go is like a nicer C rather than a modern language people should be building web services in.


Isn’t the actor model just as racy as channels? Race conditions everywhere is one of the big downsides of actors compared to other concurrency models.


Within any distributed system you can have races but in Golang the mutable nature of channels makes this almost impossible to reason about (in my experience most people use mutex’s in go unless doing something extremely simple).


People who likes Go's simplicity, have you ever looked into the even simpler syntax of lisps? For example, Clojure have minimal amount of syntax, that once you know, you know the entire language. The rest is just libraries that you use, the actual program/data structures or language modifications (macros) that each codebase usually employs a bit different depending on needs. But lisp languages are really, really simple, even more simple than Golang (or any C like language) can be.

Take a look at this (short) page which describes ALL the syntax of Clojure (except macros) if you're interested: https://clojure.org/guides/learn/syntax


As a long-time Clojure user, I find the claims that many Clojurists put forth that Clojure “has a minimal amount of syntax” to be quite misleading, especially toward beginners.

I think Clojure has a lot of syntax. The syntax is just embedded in a number of conventions, macro mini-languages, and the reader syntax itself. Because Clojure’s syntax can be extended in user space (via macros and related tooling), the syntax also grows more rapidly than other communities.

As a professional programmer who came from Python, Java, C, and JavaScript, I found a lot about Clojure compelling, but “minimal syntax” was not one of the compelling points. To the contrary, I think there is a lot more to learn in Clojure than other languages about how to properly structure your code.

As I wrote in my blog post comparing Python and Clojure:

> ... the Python programmer will observe that in the Clojure program, many aspects of the program are implied, rather than annotated by special syntax. Many Clojure proponents will say that Clojure has a “simple syntax”, but I think this is misleading. They don’t mean simple as in “easy to read without prior context”. They mean simple as in “unceremonious”. Perhaps they mean simple as a contrast to “baroque” (how you might describe Java or C++’s syntax). Clojure does have a syntax, but it is implicit from the layout of the code in the list data structures.

http://amontalenti.com/2014/11/02/clojonic


Agreed - a concrete example is the threading macros:

    (defn calculate* []
       (->> (range 10)
            (filter odd? ,,,)
            (map #(* % %) ,,,)
            (reduce + ,,,)))
This is nominally a library as it can be implemented via the language primitives. But in practice this occupies the same space as Haskell's do-notation, and the learner cannot ignore it. The lack of special syntax becomes an implementation detail.


People who like "Go's simplicity" are talking about a completely different kind of simplicity than Lisp's or Forth's or Smalltalk's.

The latter are simple in that they're built around a few core concepts which are available to the developer, almost any tool the language designer had is available to the end user. They're simple in the sense that old school legos or meccanos are simple.

Go is simple in the sense of simplistic, it provides a much larger number of restrictive features and most definitely doesn't give end-developers access to the language designer's tooling. It's the simplicity of playmobil or duplo: the developer is provided with much more concrete (and effective in a way) but much less flexible "primitives".

And this is no secret, there's a pretty famous quote from Rob Pike explaining the design and purpose of Go:

> our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

Making codebases look uniform and preventing building abstractions (by the language being actively hostile to abstractions which are not built-in) is in line with that.


"Our programmers are Googlers" comment is a retcon! It was from 2014; what was the pitch before that?

In 2010 Pike presented Go as "suitable for writing systems software" [1] such as: web servers, web browsers, compilers, programming tools, IDEs, and OSes ("maybe").

Today Google makes a web browser, and lots of compilers, and programming tools, and an IDE (Android Studio), and at least two OSes (ChromeOS, Android), and Go is not relevant in any of them (excepting Go tooling).

In 2012, Pike was still pitching Go as a C++ replacement, but was dismayed that it was appealing to Python/Ruby programmers. Why don't C++ programmers use Go? He hypothesized [2]:

> C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way.

The saltiness was aimed at Google's own C++ engineers, but I honestly don't think that Go was or even is up to reimplementing v8 or Blink.

But today Go really is enormously successful and compelling. It failed at displacing C++ but found its niche. Kudos. But retcon - it is a mainly server-side language because that's where it succeeded, not what it was designed for.

Today Rust is busy displacing C++, so it can be done. Go just wasn't the one to do it.

1: https://web.stanford.edu/class/ee380/Abstracts/100428-pike-s...

2: https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


What is with all of the condescension that other developers have towards Go developers? Go has a set of highly desirable features that are very practical for building large team projects.

Imo it's unnecessarily hostile to compare go to playmobile and diplo, and it really encourages people to overlook the genuinely excellent features that go supports.


There's unreasonable hostility towards Go because it is eating the lunch of some folks' favorite languages.

It's not rare to see people feel offended by others being productive with Go and liking it.


> some folks' favorite languages

Java/.NET, Python and Ruby? I say bring it on.


Yes, I've been programming in Lisp and Racket for the past twenty years and recently switched to Go. The main reason was the availability of a large number of packages and the general tooling for Go. So far, I'm quite happy with my choice. Productivity is slightly less than with Racket, but outweighed by the small size of the executables, easier deployment, and more 3rd party libraries. Of course, I'm missing a lot of features that pretty much any Lisp offers, but the simplicity and static typing does overall have many advantages.

Since you mention Clojure: That's useless for me, because it runs on the JVM. One reason to use Go is exactly that it produces static, self-contained executables that don't require any heavy infrastructure.


If you care about a rich standard library, and static self-contained executables, I would strongly suggest Crystal. I have used both Go and Crystal, and the latter is much more productive as well as simpler. It’s standard library is richer. See, for example, the things you can do with strings with the provided functions. I used to love Go. But after some time the verbosity got tiring. I moved to Crystal, and there is nothing I miss from Go. Two big caveats: 1. I dont work in a big team. 2. My main use case is a web-backend.


For small executables, check out Chicken Scheme. For static typing, see Typed Racket (though Chicken Scheme is also moving in the direction of gradual typing, if I recall correctly).


Lisp is simpler on a completely different dimension. Lisp code can be arbitrarily complicated to read due to arbitrarily complicated macros. This is frequently blamed for Lisp's continuing failure to take off, that every major program written in it is eventually actually written in a custom dialect of Lisp. This directly opposes, for better or worse, the sorry of simplicity Go has. (I think there's a place for both, so I'm not judging one as "better" here.)


Avoiding Lisp because macros could be abused is like avoiding C because functions could be abused. What if I create a function called "log_error" that actually tries to reformat your hard drive? There is nothing in C that requires me to give functions names that aren't nonsensical.

Macros, like nearly anything else, are an abstraction that requires a little bit of restraint and common sense when you're using them. The answer to the "functions can have misleading names and therefore can be used to create intractable tangles of incomprehensible code" is "well then don't fucking do that", which is the same answer to the majority of problems people think macros cause.


> Lisp code can be arbitrarily complicated to read due to arbitrarily complicated macros.

Lisp code can be arbitrarily simpler to read due to nice macros.

> This is frequently blamed for Lisp's continuing failure to take off.

This is largely a meme spread by people who have never developed in Lisp.

The relative lack of popularity is discussed in Lisp communities, but that particular hypothesis isn't taken seriously, if it comes up at all.


The simplicity of Go isn't in a syntactic minimalism. It's in the self-conscious restraining of abstractive power (eg. the article's example of the only generic data structures being slices and maps).


We already have a term for that: “low level”.

I assume that people are trying to attach it to the term “simple” because that word has a much more positive connotation, but the meaning isn’t the same.


“Low level” often means “close to the metal”, though.


An important difference is that Go not only has a concise set of constructs, it also tends to deliberately avoid abstraction.

So any given piece of code is very explicit and self-contained. Apart from function calls, you can tell exactly what it does without looking up anything else, like a macro definition for example. It similarly doesn't have a lot of different ways to do the same thing tends to be opinionated about idioms (gofmt, for example).

Something like a Lisp DSL would be antithetical to the Go model of what simplicity means.


> Apart from function calls, you can tell exactly what it does without looking up anything else, like a macro definition for example.

It sounds like you’re splitting hairs here. In my experience, macros aren’t any more mysterious than functions. The only difference, after all, is when they’re evaluated.


The difference is that they generate code. So now you have compile some code in your head as well as running your hypothetical execution. Nothing wrong with that but they tend to be harder to understand than functions.


The other crucial difference is how their arguments (and result) are evaluated.


Go does not avoid abstraction, only certain constructs. You do have functional abstraction with high order functions. You do have types and methods.


Yep. Copy-paste for the win.

Maybe they should do away with the abstraction that memory is endless.


Not looked into it, but the simplicity of Go doesn't only come from its syntax.

It also comes from the standard libraries which cuts down on libraries you have to use.

For most of the stuff I do with Go I don't need extra libraries and when I use them, there are no suprises which mean I can easily check what they do.

Compare that to JS or even Python, where I have to either use libraries for even the most basic stuff or have a huge amount of syntax and keywords.


This is something that people overlook quite often. It is very important to have a well-documented, up-to-date, and capable standard library packaged with your language. It just creates a smoother experience altogether.

Besides that, it also sets the standard for the 'right' way of designing your interfaces and whatnot. In Go, the third-party libraries are often compatible with the standard library methods and interfaces, which is just fantastic in both mobility between libraries, and having consistency with different projects.


> Compare that to JS or even Python, where I have to either use libraries for even the most basic stuff or have a huge amount of syntax and keywords.

in python, you don't have to. same for javascript, before the whole jquery era, and the nodejs era that followed, we didn't even know what javascript libraries were.


I‘be looked into clojure briefly. What made it unattractive to me was the lack of performance and complexity of the build tooling. Simplicity in go goes beyond the language itself. It is also very simple to build, depend on libraries and deploy.


Go has had an absolutely appalling dependency management complexity for a long time. Go modules looks like it is going to fix that but it's long time coming.


I am both using Go and Lisp and like both very much. As mentioned by others, the simplicity of Go isn't in the syntax, it is the conceptual simplicity. To large extend it does share this with Lisp. Especially if you look at scheme, most abstractions are built on top of functions, something you can equally do in Go, as you have first class functions and closures.


Simple syntax is why I prefer Erlang to Elixir. The syntax is so simple you can learn it in one day easily -- a big plus when you have to work in multiple languages at the same time for one project.


i agree with you. go is not the simple language its creators wanted. and i feel like it's only going to get more complex from here on.

for example, i couldn't get a library to work the other day because of one of the dependencies needed a specific version of another library but "go get" (1.13.4) was getting utterly confused.

i never looked into the modules thing and i found that there is a 4 parts series of articles on the official blog explaining modules are and why we need it. i just gave up at that point and decided to manually copy the version i wanted to the $GOHOME/src folder and it got me going...


FORTH syntax is even simpler than any lisp.


So is BF... that does not mean it not complex to use.


Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact

Search: