As a very opinionated user of Go for ~6 years now...
interface{} is almost always a code smell. I can judge a book by its cover with a grep and a line count for interface{} and reflect: if it's more than 20-30 hits for just about any small-to-medium codebase, I dig deeper and look at why. Usually it's people being lazy and refusing to accept my twist on "idiomatic Go": a little bit of copying is better than a whole lot of abstraction[0]. Also, the devs who tell me interface{} and type switches make it polymorphic are the ones who will always complain yet never leave the Go ecosystem. Hopefully generics in Go 2 solve this for everyone.
panic is great when you really want to crash. There are times to do this. I have only used it once in a batch job runner.
init() is a sin, ESPECIALLY when you combine init() with `.` (dot) imports. Pulling packages in for their magic side effects sucks. So does the refusal of some library maintainers to acknowledge that init() can be poisonous... [1]
Handling large arbitrary JSON objects where I need to operate on their keys but don't need to work with their values by unmarshaling to map[string]interface{}. The alternative would be code generation of struct definitions with JSON bindings, which is likely preferable if you already have code generation in your project, but can be an overhead in smaller ones.
It can also be used to create your own syntactic wrappers around APIs that consume interface{}, such as encoding/json and database/sql.
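For instance, a rough sketch of such a wrapper (the jsonutil package and Keys function are made-up names):

    package jsonutil

    import "encoding/json"

    // Keys decodes a JSON object just far enough to report its top-level keys.
    func Keys(data []byte) ([]string, error) {
        var m map[string]interface{}
        if err := json.Unmarshal(data, &m); err != nil {
            return nil, err
        }
        keys := make([]string, 0, len(m))
        for k := range m {
            keys = append(keys, k)
        }
        return keys, nil
    }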
> Handling large arbitrary JSON objects where I need to operate on their keys but don't need to work with their values by unmarshaling to map[string]interface{}.
If you just need the keys, you should unmarshal to `map[string]json.RawMessage` to avoid the overhead of parsing the values into objects only to throw them away without ever using them.
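A minimal sketch of that (the sample data is made up):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        data := []byte(`{"a": {"big": "object"}, "b": [1, 2, 3]}`)

        // Only the top-level object is parsed; the values stay as raw bytes.
        var m map[string]json.RawMessage
        if err := json.Unmarshal(data, &m); err != nil {
            panic(err)
        }
        for k := range m {
            fmt.Println(k) // keys only; m[k] is untouched JSON
        }
    }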
Handling JSON is the one place where I’ve used interface{} and not felt dirty. Sometimes you really do need to model “anything” or “whatever”, especially when passing around arbitrary data without making assumptions about its schema.
Using chan struct{} is more efficient (the empty struct takes no space), and it guarantees nothing meaningful is ever put into the channel. That's why you see it used on the Context type's Done method, for example.
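A tiny made-up example of the signal-only use:

    package main

    import "fmt"

    func main() {
        done := make(chan struct{})

        go func() {
            fmt.Println("working")
            close(done) // closing broadcasts "done"; struct{} carries no data at all
        }()

        <-done // blocks until the channel is closed
        fmt.Println("finished")
    }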
I agree with the assessment of interface{}. However, the proverb you cite does not exist. It is "A little copying is better than a little dependency," which does not apply here.
Even https://golang.org/pkg/sync/#Map requires interface{}. Any code that confines itself to raw slices and maps is probably using the wrong data structures for the problem.
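For instance, every sync.Map lookup comes back as interface{} and needs a type assertion (small sketch):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var m sync.Map
        m.Store("answer", 42)

        // Load hands back an interface{}, so every caller needs a type assertion.
        if v, ok := m.Load("answer"); ok {
            fmt.Println(v.(int) + 1) // 43
        }
    }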
It's very interesting to me how bad design decisions in Go are accepted by the users with little protest. There are other examples - GOPATH, error handling, surprising conventions in a language that supposedly has no "magic", etc.
Yes, it was created by behemoths of computer science (but more accurately in this case, dinosaurs), and they clearly have not been writing and debugging large production systems for decades now. That's why the whole thing is clearly missing the lessons learned from modern language design.
Well, they did port dl.google.com from a very large C++ codebase to Go. Do you have any apps you've worked on that are higher traffic than dl.google.com?
I didn't say Go is unusable - I said it's full of bad design decisions. Google has a LOT of people who are Go fanboys. It's their in-house language.
Terraform is also written in Go, and even though I refer to it to see how they manage a larger Go codebase, my question is - "so"? This is my opinion from writing Go; I have no problem pointing out what I don't like, even though at Google, I am sure, it would be unforgivable blasphemy.
Sure, Instagram is a giant Django app running on top of many Postgres databases. But I was pointing out that Go can be used for very large codebases running high-traffic sites.
The GOPATH is going away. It's being replaced by modules (a feature I actually like a lot, especially the vendoring).
The problem is, why did it take almost a decade for a modern language, at Google no less, to get a proper dependency management feature?
To me it says that the Go vetting process at Google is seriously constipated, perhaps because people there just say "this does not look right, but who am I to question the decisions of Rob Pike and Ken Thompson?". And then we end up with these language design decisions from the 1970s.
Google develops everything at #head of a monorepo, so their dependency management problems consist of “please rollback ASAP, your changelist broke the world”.
Currently I am facing an issue where I am unmarshalling json into a protobuf struct. Unfortunately the json tag names are a bit different than the protobuf struct field names. I would love to be able to use struct tags here, but to my knowledge protobuf definitions don't allow this, and I don't think I can put them in after the fact. So I am stuck loading the data into an interface{} and manually pulling out the fields.
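Roughly what that workaround looks like, as a self-contained sketch (the Job type, RetryCount field, and retry-count key are made up; Job stands in for the protoc-generated struct):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Job stands in for a protoc-generated struct; the real one carries a tag
    // like `json:"retry_count"` that can't match the hyphenated input.
    type Job struct {
        RetryCount int32 `json:"retry_count"` // generated code would also carry protobuf tags
    }

    func main() {
        data := []byte(`{"retry-count": 3}`)

        // Decode into interface{} and pull the mismatched keys out by hand.
        var raw map[string]interface{}
        if err := json.Unmarshal(data, &raw); err != nil {
            panic(err)
        }
        job := &Job{}
        if v, ok := raw["retry-count"].(float64); ok { // JSON numbers decode as float64
            job.RetryCount = int32(v)
        }
        fmt.Println(job.RetryCount)
    }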
I thought of this, but I would have to do this for every single protobuf message. Also, anytime the protobuf changes, you would have to update the code.
Yes it is for the protoc generated code. It would be nice if the protobuf spec allowed for something like tags as well. The main difference is the json coming in uses hyphens, whereas protobuf<->json only allows for underscore or camelCase field names.
> Yes it is for the protoc generated code. It would be nice if the protobuf spec allowed for something like tags as well.
Have you seen Protobuf options [1]? They allow for tag-like behaviour, but are strongly typed. They're notably used by gogoproto [2] to allow specifying custom field types or names [3] when generating Go stubs.
Thanks, I have looked at options. I think it could work, but I think it would require writing a custom marshal/unmarshal function for each protobuf struct, even if they are essentially the same for each one. I will have to take a closer look at the links though.
I haven’t had any issue with them. I certainly like them better than Java annotations, which encourage way too much black box magic and induce too much code coupling. The fact that they’re a little awkward to use is good, nobody should have to figure out what sort of magic happens with a pile of metadata on a field.
How do Java annotations encourage that more than struct tags? If you can write metadata, you can write code that reflects on it. I don't see any structural difference other than IBM hasn't started writing enterprise WebSphere code in Go, yet.
They're much weaker than annotations and attributes. You can amend the type system with annotations (e.g. nullability detection, as well as compile time validation - check out micronaut to see an example of what they're doing with annotations).
I think the fallthrough keyword also belongs in this category. It's a surprisingly ugly feature of the Go language, and I recall an interview where two of the three language designers said it's their least favorite part of the language.
This is my go-to speed hack when prototyping. It always works. You don't have to worry about deep nesting, and it frees you to design the data model as you prototype. It's also less punishing than reflecting on every node in a list.
And yes, golang / json (and go / node) interactions need to evolve better developer patterns and runtime performance ;)
Say you put a single type Foo in its own package to ensure that nothing ever — intentionally or unintentionally — touches its unexported (private) fields.
If "Dot Imports" are frowned upon, it means that your other code using that type is always going to have to refer to it as the stuttering "foo.Foo" rather than "Foo". That's not great.
Neat, you win at puzzle solving! I forget that the user-level type alias feature exists... maybe since it was introduced relatively late in the Go 1.x timeline.
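For anyone else who forgot it too, a minimal sketch of the alias approach (the import path and package names are made up):

    package bar

    import "example.com/project/foo"

    // Foo is the same type as foo.Foo, not a new definition, so code in
    // package bar can say Foo without the stutter or a dot import.
    type Foo = foo.Foo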
> Say you put a single type Foo in its own package to ensure that nothing ever — intentionally or unintentionally — touches its unexported (private) fields.
This is not a good reason to move a type to its own package. Packages are for grouping related functionality.
What? Exporting a type or field is an opt-in decision, you don't need a special package per type to do it.
If the fear is that other types or functions in the same package can access the unexported fields, that's unavoidable, making a new package per type doesn't reduce the risk.
This is a symptom of another underlying issue with golang: it doesn't let you import structs or functions from other packages. You can only import the top-level package name and then refer to entries in the package by the package name. Compare to Java, Scala, Kotlin, etc., where you are able to import individual classes (and methods and functions in the case of the latter two).
Putting a type into a type-specific package like that is considered (IMHO at least) unidiomatic in the first place, so this isn't a situation which can happen in practice.
It breaks the ability to Ctrl-F to find uses of the package foo. It’s surprising to other Go programmers.
If you’re working by yourself, go wild and do what you want. On teams, I’m much more likely to need to search through somebody else’s file to understand it or find something.
It also means that additions to foo might conflict with definitions in your package.
The reason this feature exists is to break circular dependencies in unit tests. So in foo_test, you might dot-import the package under test.
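A sketch of what that looks like (the import path and NewFoo are made-up names):

    package foo_test

    import (
        "testing"

        . "example.com/project/foo" // exported names from foo are usable unqualified
    )

    func TestNewFoo(t *testing.T) {
        f := NewFoo() // resolves to foo.NewFoo without the foo. prefix
        _ = f
    }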
This makes sense but it's more like general advice for why typical packages should not be import-dotted. I was talking about a very specific kind of low-pollution package that will export essentially nothing bar a single type (maybe a NewFoo constructor too). The package isn't a bucket, it's a barrier.
(I hope people move beyond ctrl-f/grepping, to tooling that has semantic understanding of definitions/references. Go go gopls & GoLand!)
If you already have the answer, why are you asking the question?
(You can’t find Foo just by looking at syntax at all, you would need to parse both the package where it is used and the package where it is defined. Using normal imports, you can get by with only parsing the call site.)
Exceptions are generally considered to not be worth the cost
Why is exception handling being added in Go 2 then? Sure, they're calling it check/handle instead of try/except (catch) because Go ought to be hip and different, but if it moves like a duck and quacks like a duck...
check() and handle() have rather different semantics; it's still just error returns, just with a bit of syntax sugar.
It's also just a proposal which received a lot of feedback/criticism, and eventually they decided to go with another try() proposal which ended up being abandoned after criticism. I'm not quite sure what the current status is here.
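For reference, the draft design specified check/handle by how it rewrites into ordinary returns. Something like this is roughly what `check os.Open(src)` plus a wrapping `handle` block would boil down to (a hedged sketch; the copyFile function here is illustrative):

    package main

    import (
        "fmt"
        "os"
    )

    // copyFile shows roughly what a check/handle version would desugar to:
    // an ordinary error return decorated by the handler, not an unwound stack.
    func copyFile(src, dst string) error {
        r, err := os.Open(src) // `check os.Open(src)` expands to this pattern
        if err != nil {
            // ...and this is what the surrounding `handle` block would contribute.
            return fmt.Errorf("copy %s %s: %v", src, dst, err)
        }
        defer r.Close()
        // ... copy r into dst ...
        return nil
    }

    func main() {
        if err := copyFile("a.txt", "b.txt"); err != nil {
            fmt.Println(err)
        }
    }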
Panic/recover isn't meant to be used like that (and it terminates the function, not just a block), so the implementation can afford more overhead than try/catch. What bothers me about it is that there's no syntax for it: recover as it is looks like a dirty hack.
I've been writing my first production software in Go this year, and I must say I'm quite favorable towards its error handling now, ugly as it may look. Even during testing there haven't been any crashes except in the places where you can expect them (like writing x := y.(string) when y isn't a string).
For me, the only real downside in Go: nil pointers in combination with the absence of non-nullable types.
This is my experience after 7 years of Go as well. Error handling is actually fine. People make too big a deal about the error check (and Rust people make too big a deal about the compiler not forcing you to handle every error). The substantial opportunities to improve error handling involve establishing a standard, typesafe interface for getting at the relevant, structured information in an error so callers can handle things properly; however this probably depends on generics and sum types and will probably look a lot like Rust’s result types (which seems great to me). In practice these are not where your bugs are. On the other hand, sum types would actually go a long way in reducing bugs (including the nil pointer problems you mention).
You're not supposed to use struct tags, type aliases, "interface {}" or reflection either... that's a lot of features Go developers "aren't supposed to use"... Why are they here then?
Frequent use of these patterns is a good smell test that one should reconsider design choices. Are there times when usage is justified? Yes. Should it be the first / default mechanism? No. Learn the rules first, and then it will be more clear when you should break them.
I have no problem with the general rule, nor with a built-in mechanism for breaking it.
Simple example: if I ever have a row lock I wrap the corresponding block in recover to ensure that I never have a deadlock. But I certainly don’t wrap every function in recover...
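A minimal sketch of that pattern, assuming an in-process sync.Mutex stands in for the row lock (the names are made up):

    package main

    import (
        "fmt"
        "sync"
    )

    // withRowLock gives the critical section its own deferred recover so a
    // panic inside it can't leave the lock held forever.
    func withRowLock(mu *sync.Mutex, update func()) (err error) {
        mu.Lock()
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered in locked section: %v", r)
            }
            mu.Unlock() // runs whether update panicked or not
        }()
        update()
        return nil
    }

    func main() {
        var mu sync.Mutex
        err := withRowLock(&mu, func() { panic("boom") })
        fmt.Println(err) // recovered in locked section: boom
        mu.Lock()        // not deadlocked: the lock was released
        mu.Unlock()
    }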
These features do not scale, so one should take care to use them only when necessary. Also, the OP is not the Gospel of Go, merely an expressed opinion.
There's no problem with using any of those features; they just come with some drawbacks and in many cases they're not the best tool. But in some cases they are, and then by all means go ahead and use them. They exist for a reason.
It's not really, because there is no straightforward way to express something like:
    def f():
        bar()
        try:
            foo()
        except KeyError as exc:
            print(exc)
Calling recover() in a defer applies to the entire function, so you'll need to wrap foo() in an anonymous function. You also need more work to avoid "Pokemon exceptions" (gotta catch 'em all) and to recover only from the KeyError equivalent.
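A rough Go equivalent of the Python above, assuming the KeyError is modeled as a custom panic value (all names here are made up):

    package main

    import "fmt"

    // keyError is a stand-in for Python's KeyError as a panic value.
    type keyError struct{ key string }

    func bar() {}
    func foo() { panic(keyError{"missing"}) }

    func f() {
        bar()
        // Wrap only foo() so the recover doesn't cover the whole of f.
        func() {
            defer func() {
                if r := recover(); r != nil {
                    if ke, ok := r.(keyError); ok {
                        fmt.Println("key error:", ke.key) // the `except KeyError` branch
                        return
                    }
                    panic(r) // anything else isn't ours to catch; re-panic
                }
            }()
            foo()
        }()
        fmt.Println("f continues after the recovered panic")
    }

    func main() { f() }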
I would add the sync package to this list. Maybe not to the same extreme, but if you force yourself to use channels you'll end up writing more maintainable code.
I would have said the opposite. Channels are great on occasion, but half the time I see them used where a simple mutex would be much cleaner and easier to read, or where a mutex could be hidden from the user entirely but a channel is used instead and has to be exposed. They each have their place. There are specific things in the sync package I see abused frequently, but I wouldn't call use of the sync package itself a code smell.
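For example, a tiny type can keep the mutex as an implementation detail so callers never see any synchronization at all (a made-up sketch):

    package main

    import (
        "fmt"
        "sync"
    )

    // Counter hides its locking entirely; callers never see a channel or a mutex.
    type Counter struct {
        mu sync.Mutex
        n  int
    }

    func (c *Counter) Inc() {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.n++
    }

    func (c *Counter) Value() int {
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.n
    }

    func main() {
        var c Counter
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.Inc()
            }()
        }
        wg.Wait()
        fmt.Println(c.Value()) // 10
    }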
The empty interface{} and struct tags really shine when you're a library author, or when authoring a bit of extremely reusable code that's pervasive throughout a codebase.
Think of things like JSON encoding (entirely reflection-based), or writing a set of functions that can operate on something more generic. It's not scary and it's not really that slow; you just need to be very particular and choosy in your application of it.
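As a toy example of that kind of library-style code, here's a reflection helper that reads a hypothetical `col` struct tag off any struct, the same shape of trick encoding/json uses internally (a sketch; all names are made up):

    package main

    import (
        "fmt"
        "reflect"
    )

    // fieldsByTag accepts any struct via interface{} and maps tag values to
    // the corresponding field values using reflection.
    func fieldsByTag(v interface{}, tag string) map[string]interface{} {
        out := map[string]interface{}{}
        rv := reflect.ValueOf(v)
        rt := rv.Type()
        for i := 0; i < rt.NumField(); i++ {
            if name, ok := rt.Field(i).Tag.Lookup(tag); ok {
                out[name] = rv.Field(i).Interface()
            }
        }
        return out
    }

    type User struct {
        ID   int    `col:"id"`
        Name string `col:"name"`
    }

    func main() {
        fmt.Println(fieldsByTag(User{ID: 1, Name: "gopher"}, "col")) // map[id:1 name:gopher]
    }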
From my perspective it's missing a lot of features, and the culture of "Go is perfect, features are bad!" is a real turn-off. C# is one of those multi-paradigm languages, so you can usually write things however you want, and I like that.
A lot of the features that do exist seem to have some strange quirks. Package/dependency management comes to mind, but maybe it's just unfamiliar to me.
The fundamentals, channels and goroutines, are really nice though.
Such as? Go is what Python should have been in my mind, although I would love a more expressive type system. But overall it generally replaces Python for me.
[0] https://go-proverbs.github.io/
[1] https://github.com/lib/pq/pull/455