Hacker News
Effective Go (golang.org)
311 points by mkl95 3 months ago | 166 comments



Go has warts and 101 faults (depending on who you are and how hard you squint)...

But what I absolutely love about Go is how easy (relative to other languages) it is to 'read' most of the Go code in the wild, including the std libs!

YMMV...

Ever tried reading C++ libs/header-libs?


This has always been my number one praise for Go as well. A lot of it has to do with gofmt I think.


Ja, I think you are absolutely right (go fmt)! The tooling is fantastic and concise.


As a team lead I love Go because no one can write clever code in it, and even a bad coder can generally write somewhat working code in it.


C++ libs are an exception. I'd argue Java stdlibs are just as readable as golang's.


Same feeling I had about Python vs Ruby. When I was working on a Python script I had no qualms about diving into the source of libraries. Although I have been scared away from Python because of the many different ways that libraries can be included in a script.


Agreed, and that is before you get to all the "function annotations" that add some degree of magic.


I want to see an “Effective Go” about unit testing functions that call other functions, without devolving into meaningless mocking and error propagation boilerplate.


I love testing in Go because I avoid meaningless mocking. Test structs that match interface signatures are meaningfully used to validate any state handling and/or error generation, and ensure error paths are properly exercised. In unit tests, we validate logs and metrics when appropriate, in addition to return values.

However, if you are mocking db, http or other net packages, you are likely doing it wrong. If you want to know that you handle an error from the db, you don't have to mock a db: you have a struct whose interface has 'GetUser(username) (*User, error)', and your test version returns an error when needed. The fact that the real code will use a db is an implementation detail. You should be able to refactor the real implementation to change from a db call to an http api call and still have valid, useful tests. Otherwise, your unit tests are too coupled and hinder refactoring.

Anyway, I love testing in Go; it is one of my favorite parts of working with it.
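To make that concrete, here is a minimal sketch of what I mean (all names hypothetical; imports are errors and testing):

  type User struct{ Name string }

  // UserStore is the seam: the unit under test depends on this
  // interface, not on a database.
  type UserStore interface {
      GetUser(username string) (*User, error)
  }

  type Greeter struct{ Store UserStore }

  func (g Greeter) Greet(username string) (string, error) {
      u, err := g.Store.GetUser(username)
      if err != nil {
          return "", err
      }
      return "hello, " + u.Name, nil
  }

  // failingStore is a hand-written test struct, not a generated mock.
  type failingStore struct{ err error }

  func (s failingStore) GetUser(string) (*User, error) { return nil, s.err }

  func TestGreetPropagatesStoreError(t *testing.T) {
      g := Greeter{Store: failingStore{err: errors.New("boom")}}
      if _, err := g.Greet("alice"); err == nil {
          t.Fatal("expected the store error to propagate")
      }
  }

Swap failingStore for a db-backed or http-backed implementation and the test stays valid, which is exactly the refactoring freedom described above.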


Disclaimer: I've never worked with Go, so there may be some nuance here I'm missing.

This sounds like dependency injection of test behavior as a substitute for the “real” IO implementations injected in whatever glue code accesses the unit under test. If that’s a correct understanding, it sounds like mocking by another name?

Not that I think there's anything necessarily wrong with that. And I think that kind of inversion of control can often produce more robust designs/systems/tests. But I think it's a good idea to recognize that's what it is, and that it has similar limitations to other mocking techniques.


There are basically two ways I have seen this work. In one case, you have an "IO thing" stored somewhere in a struct.

  type example struct {
    backend db.DB // Never actually used the database library, so this MAY not be the right type name
    ...
  }
Then your code uses that to call the methods on var.backend, and you can replace it with a test instance. This feels approximately like dependency injection, maybe? Or maybe just encapsulation?

And the other is that you pass your "IO" as a parameter. A pretty typical example would be fmt.Fprintf.

  func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error)
It is of course limited, but all testing is limited. But, most "mocking" I have seen replaces the original type while the test is ongoing, and that specifically is pretty hard in Go. It is trivial to pass in a parameter, or set a struct slot, to something that fulfils a specific interface. And it is often quite useful. And, again, of course something that you may have to write tests for, to ensure that it does what you intend it to (especially if it is complex and needs to hold some state, which unfortunately sometimes happens).
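For the parameter style, a bytes.Buffer is all the "mock" a test needs (a minimal sketch; imports are bytes, fmt, io, and testing):

  func greet(w io.Writer, name string) error {
      _, err := fmt.Fprintf(w, "hello, %s\n", name)
      return err
  }

  func TestGreet(t *testing.T) {
      var buf bytes.Buffer // any io.Writer will do
      if err := greet(&buf, "gopher"); err != nil {
          t.Fatal(err)
      }
      if got := buf.String(); got != "hello, gopher\n" {
          t.Errorf("got %q, want %q", got, "hello, gopher\n")
      }
  }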


Mocks are simply auto-generated and consistent test structs. And you still need to test the code which implements GetUser based on DB or HTTP or whatever else.


From what I understand (from making similar complaints to Gophers), if you're complaining about boilerplate, you don't get it.

The boilerplate propagation is the point in Go.

Go isn't DRY. Go would rather copy and paste a bunch of code than introduce the "complexity" of inheritance.


But you can obviously DRY without inheritance?

I'm not sure that Go pushes you to duplicate code... Do you mean copy a slightly tweak code for different use cases and types ?


I am okay with the level of abstraction available in production code. Where it gets ridiculous is the tests. Unit testing the simplest, most obvious Go code is a huge chore. Each error-returning function call costs 15 seconds to type but 15 minutes to work into the tests.


There is unnecessary friction, which doesn’t really seem to fit with the Go ethos. I would have assumed from afar that testing gets a lot of consideration in a language like Go.


It does. It sounds like the parent has had an unfortunate interaction with a bad codebase, because that is not accurate at all.


I am not being sarcastic when I say I want an "Effective Go" for this. If you have examples of high quality tests in an MVC-style service codebase, I would love to see them!


Others have recommended Advanced Testing With Go [1]. I haven't personally watched it, but Mitchell Hashimoto writes clean code.

But if you'd like I'd be happy to take a look at some of the problems you're encountering. I'm @alecthomas on the Gophers Slack or Twitter, feel free to DM me. No guarantees, but it sounds very unusual for an additional test to consume 15 minutes of setup.

[1] https://www.youtube.com/watch?v=8hQG7QlcLBk


"MVC" is not an idiomatic pattern in Go.


"MVC" is at least a half dozen different patterns at this point, and at least one - the one the closeparen is talking about, rendering responses to incoming requests out parts of a data store - is pretty common in Go.

What's different is that the "model" is often e.g. a bare sql.DB and not an object repository, and the "view" is a template called directly by the controller, but "MVC" - even this kind, which isn't the original kind - doesn't have to be an auto-wired DI interface-laden mess. PHP and Java just made it that way.


Maybe it would pay to be more specific, since "MVC" can mean a lot of things. We have requests come into handlers, which map between wire and internal representations. They invoke controllers, which implement the business logic and call gateways and repositories. Repositories wrap storage clients and gateways wrap HTTP or gRPC clients.

Do you not do this? What do you do instead? I guess I can see how testing would be less painful with fewer layers, but at the cost of the production code becoming more entangled.
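For reference, the shape I am describing is roughly this (all names hypothetical):

  // A repository wraps a storage client; a gateway wraps an HTTP or
  // gRPC client. The controller holds only interfaces, so tests can
  // substitute either layer.
  type UserRepository interface {
      FindByID(ctx context.Context, id string) (*User, error)
  }

  type BillingGateway interface {
      Charge(ctx context.Context, userID string, cents int64) error
  }

  type Controller struct {
      Users   UserRepository
      Billing BillingGateway
  }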


When I read MVC I guess I take it pretty literally, in that you would have packages named model/s, view/s, and controller/s. That's what I'm speaking to. Of course, abstractly, many programs tend to be structured in some kind of layering scheme that's MVC-ish :)


Assuming the parent is being genuine in asking for help, I don't think this is a very constructive response.


I don't understand why you would reply with this comment. If the question is "how do I do X?" and X isn't something that you should be doing, why is it not constructive to point that out?


Re error propagation: look at standard library’s JSON encoding source code. They use panic internally to bubble up errors from recursive calls and have a recover at the API boundary which turns it into an error value. I thought it was a neat trick, until error handling matures further.
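A minimal sketch of the trick with a hypothetical recursive parser (not the actual encoding/json code; the only import is errors):

  type parseError struct{ err error }

  type parser struct{ input string }

  // fail aborts from any depth of recursion without threading an
  // error through every return value.
  func (p *parser) fail(msg string) {
      panic(parseError{errors.New(msg)})
  }

  // Parse is the API boundary: the deferred recover turns the panic
  // back into an ordinary error value.
  func (p *parser) Parse() (err error) {
      defer func() {
          if r := recover(); r != nil {
              pe, ok := r.(parseError)
              if !ok {
                  panic(r) // not ours: a genuine bug, so re-panic
              }
              err = pe.err
          }
      }()
      p.value()
      return nil
  }

  func (p *parser) value() {
      if p.input == "" {
          p.fail("unexpected end of input")
      }
      // ...recursive descent continues here...
  }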


What do you mean? If you compose functions (and objects) together then you have to mock things out one way or another, inject them in, and assert that your code interacts correctly with its dependencies.

This doesn't seem like a go specific issue but an engineering one.


In Java or Python you do not normally need to write a test case for "if this dependency fails, it will be propagated to the caller." In Go you have to do this or your line/branch coverage will be abysmal.

Java and Python mocking is also a lot more lightweight; you don’t have to generate mocks ahead of using them, regenerate them when the interface changes or work generation into your build process, etc. Richer reflection APIs make it a pretty casual handful of characters to mock something out.


> In Go you have to do this or your line/branch coverage will be abysmal.

If you're writing tests to improve coverage numbers then maybe your motivation is wrong.

In most languages with automatic exception propagation you never know what exceptions can be thrown. Is it any different from not testing?


It would be interesting to know what errors we might get, but unit testing an error return branch doesn't tell us that. It tells us that any non-nil error will be returned.


> In Go you have to do this or your line/branch coverage will be abysmal.

If reliability is critical, having explicit error handling and coverage tools which expose error conditions which haven't been tested is very helpful.


I make a sequence of calls during the service of a request. If any one of them fails I want to stop and return the error to the caller.

Having to check this in the particular case of every call, at every layer underlying every handler, does not make my software more reliable than when it is simply guaranteed in general.


Explicit error handling can improve reliability in some circumstances, because it highlights corner cases which are easy to ignore otherwise.


If your Java code doesn't test exceptions that could be thrown, your branch coverage is equally abysmal and your tools aren't telling you that.


The language's exception facility does what it says. We don't need a unit test to prove that in every particular case, any more than we need a unit test to establish that the language correctly carries out our assignments or function calls.


You picked two languages (Java and Python) with extremely weak guarantees about disposal of resources when exceptions are thrown. Any time I acquire a non-trivial resource I must make sure it's disposed of properly, via a `close` etc. method. (And these are most cases of interest; acquiring trivial resources e.g. memory shouldn't need error handling in Go either.)

This isn't theoretical. I review a lot of Python code and I would say in over 20% of cases I see a try block with a `finally` longer than two lines, it mistakenly uses a variable that might not be set when an exception is thrown earlier than the writer expected.


When a controller calls a couple of gateways and a database access layer, it is almost never holding resources. I would agree that testing error handling is more interesting when there are resources to clean up or fallback logic to implement. That's just very rarely the case. Mostly I just need to bail out of the request.


Java has try-with-resources and Python has context managers, I wouldn't consider either of them harder to use correctly than defer.


I'm not talking about defer at all. The contrast would be C++'s RAII.


> regenerate them when the interface changes

You don't need to do this in Go either. Embed the interface type in your mock type. You only need to implement functions that will be called in the function under test.
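A sketch with a hypothetical interface: embedding satisfies the compiler, and calling any method you did not override panics on the nil embedded value, which fails the test loudly instead of silently succeeding:

  type UserService interface {
      GetUser(id string) (*User, error)
      DeleteUser(id string) error
      // ...many more methods...
  }

  // mockUserService satisfies UserService via the embedded interface.
  // Only GetUser is overridden; calling anything else panics.
  type mockUserService struct {
      UserService
      getUser func(id string) (*User, error)
  }

  func (m mockUserService) GetUser(id string) (*User, error) {
      return m.getUser(id)
  }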


For effective testing in Go:

- Don't unit test so much (or expand your definition of "unit", or whatever semantic difference you prefer). Test functionality. (Unit tests can still be appropriate for large classes of complicated pure functions, e.g. parsers, but these don't require mocks.)

- Don't mock so much; rather, stub (or mock if absolutely necessary) at lower levels. Use the httptest server; use the sqlmock or sqlite drivers; use a net.Conn with canned data; etc.
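For instance, HTTP-facing code can be exercised against a stub server with no interfaces at all (a minimal sketch; imports are fmt, io, net/http, net/http/httptest, and testing):

  func TestFetchUser(t *testing.T) {
      // Canned response at the wire level; the code under test keeps
      // using a real *http.Client.
      srv := httptest.NewServer(http.HandlerFunc(
          func(w http.ResponseWriter, r *http.Request) {
              fmt.Fprint(w, `{"name":"gopher"}`)
          }))
      defer srv.Close()

      // In a real test you would point your client at srv.URL.
      resp, err := http.Get(srv.URL)
      if err != nil {
          t.Fatal(err)
      }
      defer resp.Body.Close()
      body, _ := io.ReadAll(resp.Body)
      if string(body) != `{"name":"gopher"}` {
          t.Errorf("unexpected body %q", body)
      }
  }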


I would appreciate any resources you could point me towards to help make this argument against the Staff+ engineering leaders at my company who are pushing standards that say exactly the opposite.


This sounds a lot more like company politics than a technical issue, but I would probably start with Mitchell Hashimoto's talk "Advanced Testing With Go" - along with just, like, reading the tests / testing tools in stdlib. They didn't include httptest so you could spend time mocking away http.Client usage behind an interface!

(I should add that this is explicitly contra to e.g. sethammons's suggestion above, which seems to be relatively common in the part of the Go community that comes from PHP. I inherited a couple of large projects that did this. Today they use sqlite instead, and both the program and the test code are ~50% the size they used to be.)

For us, stub injection points come naturally out of 12-factor-style application design; the program can already configure the address of the 2-3 other things it needs to talk to or files it needs to output, etc, just out of our need for manual testing or staging vs. production environments. If you have technical leadership encouraging Spring-but-in-Go, you'll probably hit a wall here too though.

It's also possible you're simply writing too many functions that can return errors. Over-complex code makes over-complex tests; always think about whether you're handling an error or a programming mistake - if the latter, panic instead of returning.


Thanks for the suggestion. I watched the talk and found some new information, as well as confirmation of some things I had been starting to adopt. I don't find the stdlib very informative about my problem, since most stdlib packages are "leaf nodes" - not layers that call out to lower layers. I'll check out more of Hashicorp's tests as I suspect their code might be more similar to the kind of code I work on. From a quick glance, in all of Consul I see only a handful of Mockery mocks, suggesting they are doing something very differently.


Maintainable software projects are modeled as a dependency graph of components that encapsulate implementation details and depend on other components.

    func main() {
        foo, err := NewFoo()
        // handle err
        bar, err := NewBar(foo)
        // handle err
    }
Given a single component, each external dependency should be injected as an interface.

    type Fooer interface{ Foo() int }
    
    func NewBar(f Fooer) (*Bar, error) { 
        ...
    }
Test components in isolation by providing mock (stub, fake, whatever, it's all meaningless) implementations of its dependencies.

    func TestBar(t *testing.T) {
        f := &mockFoo{...}
        b, err := NewBar(f)
        ...
    }


You're giving a baby-level introduction to mocking in a thread about how this approach leads to low-quality, meaningless tests in some cases, and right beneath concrete suggestions about how to make tests better by deviating from this pattern.


I'm sorry if you've had bad experiences with this approach in the past, but it emphatically does not lead to low quality and/or meaningless tests. It's the essential foundation of well-abstracted and maintainable software.


Here's a test I wrote recently, in the style expected at my company. Tell me what exactly you think this is contributing to maintainability, or what you think could be done better. I spent an hour on this and found it pure drudgery. I half suspect I could have written a code generator for it in that hour instead. I had no idea whether the code really worked until I ran it against the real upstream.

The unit under test is a gateway to a configuration service.

https://dpaste.com/FZGC8R66K


It's hard to give solid advice based on a view of this single layer, but at a glance unless this gateway client is itself something to be extended by other projects, this is probably not something I would write test cases for per se. If "apipb" stands for protobuf, I definitely wouldn't inject a mock here but would make a real pb server listening with expectations and canned responses. (Our protobuf services have something like this available in the same package as the client, i.e. anyone using the client also has the tools to write the application-specific tests they need.)

The resulting code probably wouldn't be shorter, but it would exercise a lot more of the real code paths. The availability of a test server with expectation methods could also (IMO) improve readability. Instead of trying to model multiple variants of behavior via a single test case table, using a suite setup + methods (e.g. `s.ExpectStartTransaction(...); s.ExpectUpsert(...)`) would make clearer test bodies. Check sqlmock for something I think is a good example of a fluent expectation API in Go.


Wow! No wonder you find this tedious.

Your gateway struct hopefully looks something like

    type Fooer interface{ ... }
    type Barer interface{ ... }
    type gateway struct{ f Fooer; b Barer; ... }
    newGateway(f Fooer, b Barer, ...) (*gateway, error) { ... }
    func (g *gateway) StartTransaction(...)
That is, a gateway is something that depends on other things, modeled as interfaces, and provides capabilities as methods.

                 +--------------------+
                 | gateway            |
                 | - f Fooer          |
                 | - b Barer          |
                 | - ...              |
                 |                    |
    input -------> StartTransaction -----> output
                 | ...                |
                 +--------------------+
When you want to exercise this code, you want to construct an instance with mock/deterministic dependencies, so that you have predictable results when you apply input and receive output. That's the model: give input, assert output.

But your linked code is kind of different! Each subtest varies not the input but the behavior of the mocked dependencies. I understand the point: you want to run through all the codepaths in the gateway method. But is that worth testing? Do the tests meaningfully reduce risk? I dunno. It's not obvious to me that they do.

The use of gomock is also a big smell. Generating mocks kind of defeats the purpose of using them. I would definitely write a bespoke client:

    type mockClient struct {
     StartTransaction func(...) (xxx.Transaction, error)
     Upsert           func(...) (xxx.Result, error)
     AbortTransaction func(...) (xxx.Xxx, error)
    }
Then each test case is simpler to express as

    for _, tc := range []struct{
     name        string
     client      *mockClient
     input       UpdateConfigRequest
     startRes    xxx.Transaction
     startErr    error
     upsertRes   xxx.Result
     upsertErr   error
     res         xxx.UpdateConfigResponse
     err         error
    } {
     {
      name:      "success",
      client:    &mockClient{StartTransaction: good, ...},
      startRes:  ...,
      upsertRes: ...,
      res:       ...,
     },
     {
      name:      "bad start",
      client:    &mockClient{StartTransaction: bad, ...},
      startErr:  ...,
     },
     {
      name:      "bad upsert",
      client:    &mockClient{StartTransaction: good, Upsert: bad, ...},
      startRes:  ...,
      upsertErr: ...,
     },
     ...
    } {
     t.Run(tc.name, func(t *testing.T) {
      g, _ := newGateway(tc.client)
      output, err := g.StartTransaction(tc.input)
      // asserts
     })
    }


This doc goes over pages of conventions, patterns, design advice, tips, explanations, etc. without once using the idiom "best practice". It is balanced and sometimes mentions that advice can be taken too far.

I salute you for that!


I always refer junior developers (and myself) to Effective Go to sharpen their skills when working in our codebase.

Are there other hidden Go gems/guides that exist that I happen to not know about as well?


Yes, definitely dozens of pretty good online resources. Here are 3:

- Tour of Go: https://tour.golang.org/

- Practical Go: Real world advice for writing maintainable Go programs, by Dave Cheney: https://dave.cheney.net/practical-go/presentations/gophercon...

- Uber Go Style Guide: https://github.com/uber-go/guide/blob/master/style.md



Yes, a talk from one of the big wigs in Go. I think it's called 'Rethinking Concurrency'?


A bunch of other docs I've looked up a number of times and found handy:

- Slice tricks: https://github.com/golang/go/wiki/SliceTricks

- Rest of the Golang Github Wiki: https://github.com/golang/go/wiki#additional-go-programming-...

- Spec - very approachable: https://golang.org/ref/spec

- Standard library source code - clean code with lots of idioms to learn from: https://cs.opensource.google/go/go/+/refs/tags/go1.17.1:src/


Hopefully, a ton of the slice tricks will soon be replaced with "container/slice" (although you'll still need a few)!

[1] https://github.com/golang/go/issues/45955



Cool I had never seen this before. All seem on point, but I have light disagreements with this one:

> The basic rule: the further from its declaration that a name is used, the more descriptive the name must be. For a method receiver, one or two letters is sufficient. Common variables such as loop indices and readers can be a single letter (i, r). More unusual things and global variables need more descriptive names.

In the tighter parts of some algorithms I love seeing more descriptive variables (obvious exclusions, as noted, are things like i, j, x, y). It's very common to see double letter variable names that become hard to track (plenty in the go source as well).


Shameless plug on my opinionated [Go Styleguide](https://github.com/bahlo/go-styleguide) – let me know what you think!


Since you asked for feedback: Good style guides contain rationale, weigh pros and cons and then make a recommendation. For example, you say "use an assert library", but why? You make an assertion (har har) about consistent error output, but I've seen style guides that use the exact same rationale to ban assert libraries.

This reads more like a list of things you wrote to remind yourself of, than something intended for consumption by other programmers.

It's fine to be opinionated, but a normative document must explain itself, so people (1) learn something and (2) know when a rule isn't really a rule and (3) can make an informed decision to change something in 5 years when you no longer work there.


Good points, thank you!


I feel overall those are very reasonable rules. What I do differently:

1) Add context to errors - ok, but sometimes the parent caller doesn't expect or want that.

2) one-letter variables: I use them more, for example k, v for a key/value loop through a map; n can be a counter, or x is some value inside the loop.


Yeah I actually agree, will make them sound less like a must. In early Go times people would use one-letter variables everywhere which was horrible.


Not a Go programmer, but Dart has also had something like this for a while, which has been a pleasure to use; glad to see others can get the same benefits now.

https://dart.dev/guides/language/effective-dart


> Go is a new language

It’s been 12 years next month. Might be time to change that?


> It’s been 12 years next month.

Since being announced, ~9 since 1.0.

For comparison of some other currently popular languages (above it on the Tiobe Index), time from 1.0 (or something similar where typical modern versioning isn't applicable) is: Python 27 years, JavaScript 24 years, C 43 years (from K&R, realistically might count 1.0 earlier), C++ 36 years, Java 25 years, C# 19 years, Ruby 19 years, VB.Net 20 years, classic VB 30 years, Groovy 14 years.

Go is a relatively new language. Not quite as young as Rust, but young for an industrially-popular language.


As languages go, both its age and spread still justifiably qualify it as "new"


> Why is there no pointer arithmetic?

Safety. Without pointer arithmetic it's possible to create a language that can never derive an illegal address that succeeds incorrectly. Compiler and hardware technology have advanced to the point where a loop using array indices can be as efficient as a loop using pointer arithmetic. Also, the lack of pointer arithmetic can simplify the implementation of the garbage collector.

<3


> Compiler and hardware technology have advanced to the point where a loop using array indices can be as efficient as a loop using pointer arithmetic.

I literally refactored some code from index arithmetic to pointer arithmetic and bought a 10% increase in performance for an (admittedly silly) numerical performance contest on Friday, so I'm not convinced either LLVM or the "jit compiler that reorders operations inside the x86 chip" are that smart yet.

That said I would not doubt that most modern languages convert iterables that look like index arithmetic into pointer arithmetic, but if they do so I would suspect it's at an IR level above the general compiler backend.


In which language?

In Rust iterators are actually the fastest way to safely iterate over arrays, because they will elide bounds checks. If you use `array[index]` you will get a mandatory bounds check (branch) on each access. Using pointer arithmetic would avoid that, but is unsafe in Rust for obvious reasons.

In C I would assume indexes and pointer arithmetic to have exactly the same performance, since `array[i]` is the same as `*(array + i)` and there are no mandatory bounds checks. Might be interesting to move your code into godbolt and see what actually changes here.


Ziglang. We looked at it in Tracy, so we confirmed that having a counter and a pointer is bad relative to having just a pointer. We don't want bounds checks because they are 1) expensive (remember, silly numerical computation challenge) and 2) we overshoot the end of the array anyways to be able to aggressively unroll as many writes as possible at compile time; don't worry, we make sure to allocate extra slots that are mathematically guaranteed to be sufficient beforehand.


Yes. And in a more general sense, I noticed Go tried to remove everything which can obfuscate the code, make it harder to read, or become junior-unfriendly. At least that's my impression.


In general, that seems like the greatest improvement that can be made, because they optimized the language itself. They made it efficient, but still readable, so you can just read or write the code without much/any Googling. I never really understood why simplicity was considered especially "junior-friendly", when actually more experienced people can benefit the most from it. It's much easier now to just visualize algorithms used in code, without introducing tricky notations. It's also much easier to just write your code, when you finally have a syntax and stdlib that will cover the majority of typical cases, instead of a language where you need to either reinvent the world or use over-engineered libraries to solve common problems.


Contrary to this mindset of simplicity over everything else, abstraction is the very essence of the whole field. While there is indeed accidental complexity we should strive to avoid, there is essential complexity which can only be managed through proper abstractions.

So that over-engineered library that actually solves the problem is the only real productivity benefit we can achieve, since languages don’t beat each other in productivity to any significant degree (see Brooks).

So while one may not want to go into macro-magic with a semi-junior team, no abstraction will just as surely kill a project, because LOC will explode, and it will have tremendous amounts of buggy copy-pasted/multiply-implemented code, etc. And one of the few empirical facts we know about programming is that bug count correlates linearly with LOC.


> I noticed Go tried to remove everything which can obfuscate the code and make it harder to read, or become junior-unfriendly.

That is unsurprising

> They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.


The irony of that statement is that a language whose version 1.0 was pretty much Go in 1996, is now seen by them as a PhD level language.


If you take the quote out of context, maybe.

What he was saying from the previous bit is that they wanted their new language we now know as Go to be familiar to those who have learned Java, C, C++, etc.

This is opposed to it being derived from, say, Haskell, that, while brilliant, few have familiarity with.

It’s not ironic that Go is similar to Java, especially in its earlier days. Those similarities are stated as an intent of Go’s design.


I suspect Rob would say that Java 1.0 was a better language than today's Java.


Limbo was being positioned against Java, so I have my doubts.


I'm not sure how that has a bearing on how someone would view the versions of Java.


Which language do you mean? Python 1.0 seems to have come out in 1994 (https://en.wikipedia.org/wiki/History_of_Python#Version_1).


OCaml came out in 96. That would be PhD level in comparison. As for Python, it's interesting how it was considered a simple, easy to use language that focused on doing things one way back then. But now, nearing version 4, it's turned into a complex language with many ways to do things. Makes one wonder how long Go will be able to hold out. JS has gone the same route as Python. I guess Scheme managed to stay simple, but it was never that popular. OTOH, its cousin CL is complex.

To paraphrase Stroustrup:

There are two kinds of languages: the simple ones (Scheme, Smalltalk, early Python and JS), and the ones everyone uses.


I was referring to Java.

OCaml is not on the languages referred by Rob Pike.


Nothing on my comment was about Python, and Rob Pike mentioned others, like Java.


Actually you _can_ do pointer arithmetic. It's just not straightforward, and the behavior is "undefined" in the sense that even if it works today, it might break in future versions.


Yes, but it's parked behind "unsafe", and I'm pretty sure it will still work in future versions…


I really liked using Go until I tried to publish a module with a version greater than 1.

What kind of packaging system chokes on v2 of a module? What kind of language recommends[0] with a straight face that you just duplicate your entire codebase into a v2/ subdirectory?

The module system is so obtuse and the documentation so poor that I will probably avoid go and choose other languages from now on. It really feels like they came to release time and someone in the meeting said "hey guys what if someone wants to release a new major version of their library?" and everyone else in the room had an "oh shit" moment because their amateur language design didn't address that corner case.

[0] https://go.dev/blog/v2-go-modules


> What kind of language recommends[0] with a straight face that you just duplicate your entire codebase into a v2/ subdirectory?

Not Go at least. You misunderstood the post completely.

Here is an example of how you are supposed to change the import path for subsequent major versions (that is, backwards incompatible versions): https://github.com/jackc/pgx/blob/3599f646293c1b0d381214ab26...

A one-line change to your modfile and the import path is changed.


That's incorrect. The go blog recommends copying your entire project into a v2 folder. They do suggest you can use tags/branches instead, but it will cause problems with GOPATH projects.


It recommends this, sure, but there's no need for it. Doing so would be purely for backwards compatibility with older versions of Go tools that don't understand modules and rely on GOPATH. If you don't need to support older tools, then there's absolutely no need to copy anything.


Maybe I'm missing something, but I think there is a need. If your code imports dependencies A & B and they depend on C at two different versions, how can you handle this unless C is structured with versioned dirs?


Go does not map the module path directly to a file system path. You can import v1 and v2 at the same time because they have different module names. In other words, this is allowed:

  $ go get github.com/linkedin/goavro@v1.0.5
  $ go get github.com/linkedin/goavro/v2@v2.10.1
Then:

  $ go list -m ... | grep goavro
  github.com/linkedin/goavro v1.0.5
  github.com/linkedin/goavro/v2 v2.10.1
Version 1 has a go.mod that declares:

  module github.com/linkedin/goavro
and version 2 has a go.mod that declares:

  module github.com/linkedin/goavro/v2
but both have their Go files located in the root of the repository.


GOPATH is luckily soon dead.


Which is kind of remarkable. Python still seems to try to support every package manager and format that ever existed, and that’s part of why packaging in Python is so awful. Nice to see a language that can move forward with such a major change in such a short time and with relatively little pain.


Python is a lot older and a lot more widely used, so it’s not as easy to just change things.


Python 3 is pretty close to Go in age and was purposely compatibility-breaking, so it could have easily changed things at that point, and they definitely knew about the problem at that point.


The difference between 2 and 3 is way smaller than people make it out to be. I migrated a large Python 2.7 project to Python 3.8 and it wasn't a particularly painful experience. I feel that way more effort should have been directed at making 2 and 3 compatible.


It took more than a decade for the ecosystem to migrate. IIRC, a lot of effort went into making them compatible, but some things are just fundamentally incompatible and also very difficult to automate (e.g., string encodings).


I learnt Python in 2013, when Python 3 still had some serious growing pains. I switched from 2 to 3 almost overnight circa 2015 or 2016.

The last few years I cannot help but feel they were very close to making 2 and 3 play nice together, but the idea ironically lost traction because of how much Python 3 adoption had accelerated.


Python 3 is easy for a Python 2 programmer to pick up, but that doesn’t mean it’s easy to port large systems from one to the other.


Yeah, that's the thing about having so little existing software, especially stuff that has been essentially finished for a decade or longer.

If Go is successful for any significant period of time, that will change.


Go is pretty established. By the time this module change was made, a huge swath of the cloud ecosystem had been written in Go, including core components like Docker, Kubernetes, Terraform, etc. It’s also the default language for which Terraform and Kubernetes extensions are written, as well as the default language for container infrastructure (i.e., container runtimes, etc).

I think Python’s packaging story is a bummer in part because Python leans so hard on C which doesn’t have a standard build system or a sane approach to building or packaging.


> Go is pretty established

Sure, and when it has been “pretty established” for a couple of decades, it’ll have a chance to have the community weight of important legacy projects that Python has.


This is a canned argument for any criticism of older programming languages.

First of all, it’s not much of a consolation to users that the packaging and performance are bad “because the language is old”, and secondly Python had already had half a dozen significant changes to its packaging format and ecosystem by the time it was Go’s age (distutils, cheeseshop/pypi, PEP241, setuptools, eggs, easy_install, etc).

Note that by this time, Python’s package management was far more complex than Go’s and it was also far more painful (and indeed, still is far more painful). Note that Go deprecated GOPATH when the language was already 9 years old, at a point when it was quite a lot older and more widely used than Python’s myriad package formats which are still supported today.

We can say “No fair! Go has the benefit of hindsight!” and true enough, but again as a user, I don’t much care—I just want the tool that’s going to solve my problem with the least headache.

And this isn’t even touching on the Python 2-3 saga, which began when Python 2 was merely 8 years old and lasted for more than a decade. If you want to argue that we should use the Python 1.0 release date for our comparison, then fine: call me in ~4 years if the Go ecosystem is on the precipice of a suite of breaking changes which constitute an existential crisis. My money says that won't happen.


> This is a canned argument for any criticism of older programming languages

No, it's an argument that applies to the ease of making disruptive changes to languages with less legacy in the ecosystem (age is a factor, but not the only one; it's also a reason why Ruby, which was a similar age, had an easier time with the disruptive 1.8 to 1.9 transition than Python did with 2 -> 3).

It doesn't apply to all criticism, or even to “older languages” generally (at least not equally, though survivorship bias means it is likely at least somewhat applicable to older languages that it is interesting to discuss in the first place.)


Fair enough, but I don’t think we can argue that Go had considerably less “legacy” during its modules transition than Python had during its various package management iterations. I think Go is just quite a lot simpler than Python (or Ruby for that matter) especially in how it distributes software (just source code, minimal dependencies on C, etc) which makes package management a lot easier.


> A one-line change to your modfile and the import path is changed.

Another detail: since your import path change also affects internal imports between packages within your module, you also need to fix up a bunch of .go files in your repo (worst case, all of them).

I still think it's not a big deal (it's all pretty straightforward non-magic). Just a small nitpick that it's not just one go.mod change.
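For example, with a hypothetical module path:

  // go.mod, before:  module example.com/mylib
  // go.mod, after:   module example.com/mylib/v2

  // ...and every internal import in the module's own .go files must
  // follow suit:
  import "example.com/mylib/v2/internal/parser" // was example.com/mylib/internal/parser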


Yes, any consumer must adopt the new import path, although that is true regardless of strategy.


> Not Go at least. You misunderstood the post completely.

What? From the post:

> The recommended strategy is to develop v2+ modules in a directory named after the major version suffix.

> …

> We recommend that module authors follow this strategy as long as they have users developing in GOPATH mode.

Although with a reservation, they clearly do recommend it.


"As long as they have users developing in GOPATH mode."

It's documentation written when GOPATH was still widely used for development.

I think nowadays it's less of a concern to have a module first code base with GOPATH unsupported. It's nicer to use a V2 branch. Or delegate V1 to a V1 branch and maintain V2 in master.


I’m not disputing that things may have changed since the post was written. But the comment I responded to claimed that the linked post didn’t recommend copy-into-subdirectory, which it clearly does.


How is a branch nicer than a subdirectory? Off the top of my head, it seems like backporting bugfixes might be easier using a branch (if you're lucky, you can just cherrypick the commit and it will work) - any other reasons?


Thanks for this informed comment. A timely reminder to me to read TFA, because I took the parent comment at face value on first reading.


I've been a long time golang user and while I like to give the golang maintainers the benefit of the doubt, I have also struggled heavily with go modules.

I think my main issue with it is that it does things implicitly without you telling it to. Running go commands will modify the go.mod file all the time, and this is largely invisible to you. As a result, your build is basically a moving target and seems to lose all reproducibility. It also makes the entire thing very opaque and hard to understand at times, since you need to understand the implementation details of go modules to fully know what's happening.

In comparison, take godep. While not perfect, it simply did what I told it to, and only modified the dependency state when I gave it a command. This meant I knew exactly what was happening in my build system, and could update dependencies when I chose to.

If anything, go modules is too smart for its own good... And I think this is a big problem for a build or packaging system. It needs to be explicit and simple. It's odd it was designed like this, given the language itself strives for these ideals.


I'm on the other end of the spectrum for this one. Breaking changes in a stable library (i.e. has reached v1) should be totally avoided.

Statically typed languages and fancy features like IntelliJ's structural find and replace just don't remove the pain and annoyance of fixing an unexpected change in behaviour of a library in a large code base.

It's just horrific drudge work, utterly unenjoyable, thoughtless labour that needs careful attention.

The go module versioning is clearly designed by someone who has worked on a large many developer code base before.

I can have V2 and V3 linked into my binary at the same time - a blessing for incremental refactoring.


I think you have named the one good argument for this scheme: allowing a go module to exist in several versions in a project. Before the module system, this was just not possible, which, in general, is a good thing. Programs today tend to have too big a dependency tree anyway; at the least, there should be only one copy of a module in the system. However, this is an idealistic view, and depending on the number of dependencies, at some point it might not be possible to maintain it. npm on the other side goes totally crazy: every module import is local, and you can have hundreds of copies of the same module in your system.

The go module approach seems to be a middle ground. It puts a lot of pressure onto module maintainers not to break things within a major revision. They seem to be aware that this is not always guaranteed, which is why the module system not only tracks the major revision, but the exact subrevision of the module. But at least it can enforce that changes which absolutely do break backwards compatibility are on different major revisions. Of which you are allowed several inside your project.


I really like golang, and have used it for years. But I really dislike gomod. It's not uncommon for using it to be a frustrating experience. It makes me glad that most of my work these days is not in golang.


The reason I like this is because of one of the principles that Go espouses pretty heavily - never break backwards compatibility. If I install v1 of your package, I want it never to break. You can always improve performance, add more methods, etc, but the normal Go upgrade does not involve any code changes at all. The tools formalise that for modules as well, and encourage building a new module altogether if backwards compatibility is being broken.

If it’s not being broken, it’s not a major version update, and major breaking updates are better off as new separate modules.


Except in practice in all real software (including the go language!) minor and patch versions break importers all the time in small ways, AND major versions are usually used to indicate significant new features, not just breaking changes.

So your claim is reduced to ‘I don’t want v1 to break my software in ways that the dependency finds important enough for a major version’.

Not quite as reassuring is it?

Plenty of security fixes and undefined behaviour fixes break importers all the time, and often importers and exporters disagree about the severity of that. Sticking with v1 will not save you that pain.

I preferred the situation before where there was strong social pressure never to break the main branch.


This is a cultural issue.

It's not inevitable that by developing a module over time you constantly break compatibility AND want all the people that depend on you to upgrade versions.

Go encourages the opposite approach.

If your API is not stable, don't publish a package for public consumption, or, make sure to clearly mark it as unstable, so people know what to expect.

One of the absolute best features of Go is stability. If I have a project written a few years ago, I can always run it against the latest Go compiler and know that it will still work.

Why does this matter? I'll give you a counter example.

In 2016 I wanted to work on a mobile application project and I just picked React Native because I heard it lets you easily create cross platform applications.

I worked on it for a few weeks, then left it for about two or three months. When I tried to pick it up again, I installed the latest versions of node and npm and upgraded all packages.

The project stopped working. I had no idea what went wrong. The error messages were obtuse and several layers deep. I tried to debug it for weeks to no avail.

As a result, I abandoned the project and built just an Android version using Kotlin and Android Studio.


> I installed the latest versions of node and npm and upgraded all packages.

Laughter of recognition from this side. I am working on an application that has a tiny bit of Web UI. Basically one button, "Yes, I want this thing" and some text that guides the person staring at the button to decide if they in fact do want this or don't. I didn't write any of that code, and my Javascript knowledge is pretty rudimentary, but hey it's just a button right?

On Monday I'm sat (virtually) with the guy who wrote that part, pair programming the dependency injection stuff so that this button can actually cause stuff to happen. It builds and runs fine on my machine, we make lots of progress. During the week I focus on other parts of the system (including a red herring where we're convinced for an hour we have a DST Root CA X3 expiry issue but it's actually a syntax error in a Puppet file on the in-house node classifier), but on Friday we're ready to check the current state. However the guy who wrote the button code is out. No problem though right, I understand the larger system, just hit build?

Nope. I get a JSON parsing error, which, it turns out, is how npm or node communicates "I am the wrong major version" because of course it is. Incremental improvements to a single page with a button on it had resulted in needing a newer major version in less than one week.


It certainly is a cultural issue which is why it shouldn’t be enforced by tooling!

I also completely agree about the value of stability, which is why I don’t like this change which paradoxically encourages more churn (normalises v2,v3 etc for breaking changes, normalises large breaking changes and rewriting apis).

All APIs are slightly unstable in some sense (even adding fields can be a breaking change), so the aim should be to minimise version bloat and breaking changes and provide stability for importers, which the go language has done an amazing job at for the last decade.

To give you a counter-example, the SendGrid API in Go has had multiple large breaking changes/rewrites and has used this v2/v3 etc. scheme. It's still a horrible experience as an importer, and I'd rather they remained stable instead of introducing massive churn with the cop-out that the old unsupported version didn't change.

In dependencies I import I want a perpetual v1 which doesn’t change over a decade and just slowly improves, not v31 - new improved rewrite this year! Which this rule explicitly encourages.


> All APIs are slightly unstable in some sense (even adding fields can be a breaking change)

Of course adding fields is a breaking change. It has never ceased to astonish me how little most programmers understand compatibility; I remember wrestling with libpng back in the 1990s, getting them to understand that e.g. no, "tidying up" the order of fields in a public structure you've published isn't safe, before they came to god and actually provided sane APIs which hide internal data structures.

Now, it's true that Hyrum's Law means any detectable change, even if it was never documented, will break somebody if there are enough people depending on your system. That's a big deal if you're the Linux kernel, or the HTTP/1.1 protocol, as it means even trivial "this can't break anything" fixes may break stuff. For example, as I understand it Google once broke search in Asia by changing an internal constant so that a hashmap would re-allocate (thus invalidating pointers to items in the map) slightly earlier. C++ code relying on pointers to just magically stay valid across inserts was completely wrong before they broke it, but anybody staring at a 500 page on google.com doesn't care why it broke they just want it fixed.

Most of us needn't much worry about Hyrum's law. That would be, as an old boss repeatedly told us, "A good problem to have" because it means you're having enormous impact.


> In dependencies I import I want a perpetual v1 which doesn’t change over a decade and just slowly improves, not v31 - new improved rewrite this year!

> Which this rule explicitly encourages.

I agree with the first part (I'd much prefer having a MyFunction2 in a package if necessary, than a breaking change to the package itself).

But my takeaway from the design was the exact opposite - that it discourages breaking changes, by making them a little less convenient, and that the purpose of the feature was to allow major versions to simultaneously exist to avoid build issues [1]. I think I got that impression by reading the long articles that the team lead put out about the design decisions [2], and observing that the standard libraries themselves rarely break compatibility.

Unfortunately watching various discussions some people do feel encouraged to put out new major versions since upgrades are "solved" (even though that's only from a build system perspective, you're still causing client churn).

[1] https://research.swtch.com/vgo-import

[2] https://research.swtch.com/vgo


Two problems here:

Major version numbers are not in practice used for breaking changes but more usually for new features.

The unintended consequence is to make it more acceptable to add breaking changes and force upgrades to stay current.

It’s not a huge deal though and I’m interested to see how this decision works out. Personally I think it is none of the package managers business.


The problem here is that we add too much extra meaning to version identifiers. In semantic versioning, if you're at 1.38 and want to release a cool new version with many cool features and a much faster engine, while still retaining backwards compatibility, you're stuck with releasing a 1.39. If you call your new version 2.0, you signify that it's backwards incompatible.

We should have two differently-looking version identifiers, one for compatibility-tracking purposes and one for signifying the scope of changes to humans. The compatibility version numbers could look like 2ah4, where the first number is the major version, the letter (or series of letters) is a minor, increasing alphabetically and going to "aa" after "z", and the last number is a patch.


I think they are always fuzzy and that’s fine, as long as tooling doesn’t make incorrect assumptions about meaning.

Different exporters and importers have different stability requirements and expectations - it’s negotiated.


It looks like a subdirectory is only recommended if you want to support old versions of the toolchain. Using branches or tags is probably the way to go these days.


Yes this was a regrettable decision which causes lots of unnecessary pain and confusion and I wish they’d reverse it.

You can just use tags, no need to copy files, but IMO it should not be the package management system’s job to enforce draconian rules about semantic versions and import paths. Otherwise go mod seems pretty straightforward and sensible to me though.


This was just to support backwards compat with GOPATH for the brief time that the ecosystem was transitioning to modules. Now that GOPATH is rarely used, it’s not necessary.


I was talking about the requirement for v2/v3 in paths on major version as a regrettable decision, not the particular choice of duplicating dirs (which they recommended initially and as I pointed out can be sidestepped with a tag).


Ah, my mistake.


> and everyone else in the room had an "oh shit" moment because their amateur language design didn't address that corner case.

I doubt that’s how it went down. They tend to be very confident in their design decisions. Whether that confidence is warranted or not.


[flagged]


everything is political nowadays.


When were things not political?


So why did we need go when Java already existed?


Try writing e.g. a TLS proxy in Java and then in Go, and get back to us.

For byte arrays, sockets, encodings, cryptography, etc. the stdlib is unparalleled.


Not arguing that Go is a great fit to these niches, but imo java has a really great standard library, including these areas.


I don't know, why did we need Java when CL/Smalltalk/etc. already existed?


Because Java has become unpleasant to code in. Do you really think what you like is what everyone else likes?


While I agree with sibling poster that modern java is nothing like the old times, are we seriously comparing it to Go with manual error handling and the like?


If you are still coding in Java 5 then yes it is unpleasant. I have seen a lot of java 5 like code in Java 8 environments and hence the unpleasantness. However, even though I am still upskilling myself to use Java 11 and later, I am finding that my Java code is less verbose and more efficient.


I sorely miss some kind of functional streams API in Go when switching between Go and Java. For-loops make the equivalent code absolutely awful, plus it's not automatically parallelizable by replacing .stream() with .parallelStream(). IIRC, some Go luminary decided map, filter, apply etc. were no good, so I hold little hope of something like the Java streams API in the future.

Maybe someone can recommend a third party library.
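For what it's worth, once type parameters land, a bare-bones Map is only a few lines. A sketch against the accepted generics design, not anything you can compile on a released Go today:

  func Map[T, U any](s []T, f func(T) U) []U {
      out := make([]U, 0, len(s))
      for _, v := range s {
          out = append(out, f(v))
      }
      return out
  }

  // usage: doubled := Map([]int{1, 2, 3}, func(x int) int { return x * 2 })

It still wouldn't give you .parallelStream(), but it kills most of the for-loop noise.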


I question the relevance of newer Java versions. Established codebases don't like to upgrade and new codebases are likely to use trendier languages.


Are we looking at the same languages? Java is far more a productive language with generics, lambdas, switch expressions, pattern matching.


Java (and C# and C++) have become kitchen sink languages. Everything is just thrown in. That makes it less productive for me. I've decided I don't want to carry around that much in my head all the time in order to read somebody else's code. I switch languages a lot these days.

Go is (hopefully) getting generics in a few months. Some features are missing, but most don't cause much pain and can be either simulated or worked around. Error handling is its biggest failure, but outside of that I find it at least as productive as other compiled languages in the same space.


> Java (and C# and C++) have become kitchen sink languages. Everything is just thrown in.

This is far from the truth when it comes to Java. Java doesn't add features willy nilly. They are thought out, and in general implemented better than in other languages. Java's WHOLE philosophy is to move slow and let other languages experiment with features so the kitchen sink doesn't get thrown in because Java has _actual_ backwards compatibility promises and has to support those features for eternity.


> Java is far more a productive language

Since when is productivity a priority for most developers? Seems like 2nd or 3rd fiddle based on HN sentiment these days.


Because they're two different languages, tackling different problems, with entirely different sets of benefits and costs associated with their use.


That’s not enough to justify the (10s, 100?) millions of dollars of investment. Java is faster and safe and can do everything go can do. C++ can be even faster (at least there are no pauses from a gc) at the cost of some safety.


The fact that Go has gained so much adoption probably justifies that investment. They are completely different languages and much of it is personal preference.


> Java is faster and safe and can do everything go can do.

Including generating static binaries without the use of requiring to download a runtime?


Yes, there have been AOT compilers available for Java since 2000. But more recently, Graal creates very nice binaries.


Yup, with everything else being equal, given the choice I will always use the project written in Go.


Oracle.


Which increased the pace of development of Java and fully open-sourced it and is overall being a really great steward of the language? That Oracle?


No, the other one. The Oracle we hate and have hated for more than 20 years now.


Deployment story is a tentpole feature, of which these two have very different ideas.


Go will lose significant market share as soon as Java and .NET have a mature and usable AOT compiler.


Soon. Still, .NET Core is already faster even without AOT in most cases: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I’m worried it won’t. There are an awful lot of people enthusing about how novice-friendly it is, who don’t seem to worry about wasting experts’ time rereading noisy boilerplate (which an expert should have abstracted out and generated instead).


When I look at a language I could not care less about the selection of keywords, the syntax used for a loop or the naming conventions. What I look for are the means of abstraction that the language provides, and Go gives nothing more than C.


> Go gives nothing more than C.

Huh? At the very least Go has a safe, robust string abstraction which C does not have. Not to mention slices, maps, range based for loops, a "batteries included" style standard library, and more...


Go has strings, slices, maps / hash tables, channels, and interfaces. C doesn't have any of those things. (It has quirky string libraries, and string literal syntax, and that's about it)


That's an explicit feature and not a bug for the design of the Go language. They don't want it to be a kitchen sink of a language with everything for everyone.



