
One should be careful about these mega-interfaces that define an entire subsystem of your application (DB in this case). They are very painful to maintain, and add almost no value. Interfaces should always be defined by the consumer of the interface, not by the producer. And interfaces should never be used until there is a clear need. CodeReviewComments covers the basics: https://github.com/golang/go/wiki/CodeReviewComments#interfa...

I wrote an insanely long rant about this with thorough examples using the structure of the article, but it was too long for HN. So I put it on my blog instead: https://jrock.us/posts/go-interfaces/




Fair criticism. I'm the author, and I hate using interfaces like this, but I didn't comment on it, probably because I've gotten used to it for this kind of thing.

The only reason why I defined inventory.DB (https://github.com/henvic/pgxtutorial/blob/main/internal/inv...) is to be able to test. Otherwise, I'd have skipped it.

I haven't read your article yet, but I'll do so later, get back to this subject, and share my opinion by the end of the day (I'm on vacation and need to hurry to catch the train!).


Testing is good, but probably not a good enough reason to add unbounded ongoing maintenance costs. The problem is that putting everything in one interface forces you to implement the test version of DB every time you add real functionality to the real implementation. That gets pretty expensive in terms of time, and forces you to do work that's not necessary. (You could always test using the real implementation, and that's the approach I'd recommend starting with. When it gets too slow, look for an alternative.)

My blog post covers what unit testing is like with your interface, with a consumer-defined interface, and just putting some test data fields in the DBImpl struct. The latter is my preferred approach. It took me a long time to get there. I was working at Google writing some filesystem access code and wanted to test it. I designed it as interfaces so I could have a real implementation and a test implementation, and a member of the Go team reviewed the code and told me it was absolutely inappropriate to use interfaces there. I disagreed at the time, but I've slowly come to terms with this style. Separating out production and testing is "clean", but that cleanliness comes at the cost of a mental abstraction burden (every time you read the code you're confused about where the meat lives) and unbounded future maintenance costs (every time you want to add a new feature, you have to edit two files).
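
To make the last option concrete, here's roughly what I mean -- a quick sketch rather than the exact code from the post, with made-up names (DBImpl, Thing, GetThings) and assuming pgx v4's pgxpool:

  package db

  import (
      "context"

      "github.com/jackc/pgx/v4/pgxpool"
  )

  type Thing struct {
      ID   int64
      Name string
  }

  // DBImpl is the one and only implementation; the extra field exists
  // purely so tests can short-circuit the real query.
  type DBImpl struct {
      Pool *pgxpool.Pool

      // TestThings, when non-nil, is returned instead of querying Postgres.
      TestThings []Thing
  }

  func (d *DBImpl) GetThings(ctx context.Context) ([]Thing, error) {
      if d.TestThings != nil {
          return d.TestThings, nil
      }
      rows, err := d.Pool.Query(ctx, `select id, name from things`)
      if err != nil {
          return nil, err
      }
      defer rows.Close()
      var things []Thing
      for rows.Next() {
          var t Thing
          if err := rows.Scan(&t.ID, &t.Name); err != nil {
              return nil, err
          }
          things = append(things, t)
      }
      return things, rows.Err()
  }

A test just constructs &DBImpl{TestThings: []Thing{{ID: 1, Name: "widget"}}} and hands it to whatever consumes it; there's no second implementation to keep in sync.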

I don't think having a mock framework do codegen to make generic mocks is that useful. Just keep the exact behavior close to the test. A working test harness for a particular unit is something small enough to write out in an HN comment, and keeping the test code close to the test (and not wrapped up in some complicated framework) makes it much easier to enhance and debug the tests. (I've had Mockery forced upon me before, and I'm miserable every time I have to interact with it. Have to get the right version. Have to update an interface after writing the real implementation. Have to regenerate the mocks. Then I have to refer to the docs to do even the simplest tests, and those tests end up not doing what I want (instead producing something useless like "assert that function was called with the right args"). It's a chore that isn't worth the cost.)

I think the underlying psychology behind the mega-interfaces is probably that we come up with an idea for what the interface should look like, and want to write it down somewhere. Then we can disengage the brain and start filling out the method bodies. That is not a good enough reason to add a layer of abstraction to the code; you can always type out 'func RealImplementation() { panic("write this") }' if that's the real reason.

My view shifted when I was trying to use the OpenTelemetry library (circa v0.6.0) to emit Prometheus metrics. I wanted to set the histogram buckets on a per-metric basis. I chased 100 layers of indirection (every function took and returned an interface, so you'd need to have like 5 code windows open to debug the flow of data between the interface level and the implementation level), and never figured out how to do it. I think I determined it wasn't possible. It was at that moment that I decided "wait, I think everyone is using interfaces wrong", and then found CodeReviewComments telling me "yup, it's wrong".


For testing I wrote https://github.com/jamesrr39/go-mockgen-tool to generate mock implementations of a given interface. It takes a different approach from the usual reflect- and interface{}-heavy libraries. Might be interesting!


Quite reasonable! I think if you keep your interfaces small, you might not need it, but if you do have a couple of beefy interfaces, then this is a lot more pleasant than using a dedicated mock framework. A function is a great way to mock a function -- you can do anything! ;)
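
For example, a rough sketch with made-up names (thingGetter, Thing):

  // The small, consumer-defined interface the test needs to fake.
  type thingGetter interface {
      GetThings(ctx context.Context) ([]Thing, error)
  }

  // fakeGetter adapts a plain function, so each test supplies exactly
  // the behavior it needs as a closure.
  type fakeGetter struct {
      getThings func(ctx context.Context) ([]Thing, error)
  }

  func (f fakeGetter) GetThings(ctx context.Context) ([]Thing, error) {
      return f.getThings(ctx)
  }

In a test you write fakeGetter{getThings: func(context.Context) ([]Thing, error) { return nil, errors.New("boom") }} and you're done -- no codegen step.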


Looks interesting! Reminds me of impl[0], which comes bundled with `vscode-golang`.

[0]: https://github.com/josharian/impl


I read your article and it has good points. I've added a section pointing to it just after I talked about interfaces, so other people can read it too.


Thanks for giving it a readthrough! Just wanted to mention that even though I'm critical about this one thing, your article as a whole is superb. If people start their Go projects using pgx, tern, and have tests for their database functionality, they're going to be set up for a lot of success. The interface thing is just a minor annoyance I have because I see so many people doing it. (I'm sure my past self has done it more times than I'd like to admit.)


I agree with the basic idea in your blog post -- that interfaces should be defined by the consumer, and be small -- but I think it's a bit dogmatic: "it’s wrong". :-)

For one thing, it's easy to work around the issue of "pasting every single function signature into this test file, and making it panic("not implemented") or something". You can simply embed the DB interface in your test struct, and only override the method(s) you need. For example, let's say the DB interface defines 10 methods, but our function under test needs only one, GetThings(). We do this:

  type testDB struct {
      DB // embedded field named DB, so testDB implements DB
      things []Thing
  }

  // but override GetThings method
  func (d *testDB) GetThings() ([]Thing, error) {
      return d.things, nil
  }
The embedded DB field (a field named DB, of interface type DB, left nil) pulls in all the methods of that type, so testDB satisfies the DB interface. When you construct a testDB you just leave the DB field nil -- if any of the not-explicitly-overridden methods are called, they'll panic with a nil pointer dereference, but that's okay: they would explicitly panic in your test implementation anyway.
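
For example, in a test (HandleListThings is a made-up name for whatever function consumes the DB):

  // The embedded DB stays nil; only GetThings will ever be called.
  db := &testDB{things: []Thing{{Name: "widget"}}}
  got, err := HandleListThings(context.Background(), db)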

The other thing is simply pragmatism: at some point it gets tedious to define all these tiny interfaces. Do you do it at the handler level, where each db sub-interface only has 1 or 2 methods? This blows out the number of interfaces. Or do you do it at the server level, where the db interface has all the methods (what you're opposed to)? Or something in between: have server "sections" grouped by logical feature, such as authDB, projectDB, and so on. Each might have 5-10 methods, and to test each sub-server you'd either do what I've suggested above with an embedded interface field, or just implement a single in-memory test db for each db type.
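
The middle option might look something like this (authDB and the method names are invented for illustration):

  // Defined next to the auth handlers, which are its only consumer.
  type authDB interface {
      GetUser(ctx context.Context, id int64) (User, error)
      CreateSession(ctx context.Context, userID int64) (Session, error)
      DeleteSession(ctx context.Context, token string) error
  }

  type authServer struct {
      db authDB // the real implementation satisfies this, and so does a small in-memory fake
  }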

I do like what you said about interfaces obfuscating the code (it becomes more abstract and hard to follow and navigate), and that testing on the real db if you can is a good idea. Creating a new PostgreSQL db for every test seems like it'd be very slow and unnecessary, though. Why not just one db for each test run?


Thanks for writing this up! This is something I've struggled with in my Go projects, and I didn't know a good way to solve it. Your writeup and the reference to the go wiki was very helpful.


Oh, I also defined database.PGX (https://github.com/henvic/pgxtutorial/blob/main/internal/dat...), which is a little scarier. I'll get back to you about that too, but this one is perhaps even more useless, and there might be a better way to address the issue I'm trying to avoid with it.


Yeah, I think in that case, just make a decision as to whether or not you're going to use Pool or Conn in your application code. If you change your mind, just refactor; it's not worth maintaining the interface just to open up the opportunity to change in the future. Pool is probably the right choice forever.

If it's for testing, I'd just skip mocking that out and run the queries against the real database, like you do in the article. Production is going to connect to Postgres and run `select * from foobars`. Might as well run `select * from foobars` in the tests, so I'm not surprised when it doesn't work in production.
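
Something like this is usually all the test harness you need -- a sketch, not the article's code, assuming pgx v4's pgxpool, a test database passed via an environment variable, and the made-up DBImpl/GetThings names:

  package db

  import (
      "context"
      "os"
      "testing"

      "github.com/jackc/pgx/v4/pgxpool"
  )

  func TestGetThings(t *testing.T) {
      dsn := os.Getenv("TEST_DATABASE_URL")
      if dsn == "" {
          t.Skip("TEST_DATABASE_URL not set")
      }
      pool, err := pgxpool.Connect(context.Background(), dsn)
      if err != nil {
          t.Fatalf("connecting to test database: %v", err)
      }
      defer pool.Close()

      db := &DBImpl{Pool: pool}
      if _, err := db.GetThings(context.Background()); err != nil {
          t.Errorf("GetThings against the real database: %v", err)
      }
  }

If that's too slow later on, that's the time to reach for fakes.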


It wasn't for testing. I wanted to expose only a limited subset of *pgxpool.Pool that was "safe" to be used from any place. For example, without the Close() function, which might be called during graceful shutdown. I wanted to avoid exposing such things as a kind of verification process, and that's a little bit silly.


That's a good reason. I think I would prefer adjusting my linter to detect the unsafe uses. That way, if someone really wants to, they can "// nolint:unsafe_db See discussion in ISSUE-1234".

(I haven't done this, but it does seem very achievable for this use case. See https://disaev.me/p/writing-useful-go-analysis-linter/ and https://golangci-lint.run/contributing/new-linters/)
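
The core of it would look something like this -- an untested sketch: the analyzer name is made up, and matching the receiver type by string suffix is a shortcut a real linter would replace with the exact import path:

  package unsafedb

  import (
      "go/ast"
      "strings"

      "golang.org/x/tools/go/analysis"
  )

  var Analyzer = &analysis.Analyzer{
      Name: "unsafe_db",
      Doc:  "reports Close calls on the shared pgx pool outside the shutdown path",
      Run:  run,
  }

  func run(pass *analysis.Pass) (interface{}, error) {
      for _, file := range pass.Files {
          ast.Inspect(file, func(n ast.Node) bool {
              call, ok := n.(*ast.CallExpr)
              if !ok {
                  return true
              }
              sel, ok := call.Fun.(*ast.SelectorExpr)
              if !ok || sel.Sel.Name != "Close" {
                  return true
              }
              // Ask the type checker what the receiver is.
              if t := pass.TypesInfo.TypeOf(sel.X); t != nil && strings.HasSuffix(t.String(), "pgxpool.Pool") {
                  pass.Reportf(call.Pos(), "do not call Pool.Close here; see the graceful shutdown code")
              }
              return true
          })
      }
      return nil, nil
  }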

I didn't look closely, but I think that just wrapping the native struct in an interface without the method someone wants to call will just lead them to write `wrapper.(*pgxpool.Pool)` and call the unsafe method anyway.



